Letter written in response to article on brain fingerprinting in "The Register" and FAQ on Brain Fingerprinting

Dear sir,

I recently read Mr. Greene's article attacking "brainwave fingerprinting" and its scientific feasibility. It was good for a laugh, but because of the importance of informed discussion on the subject, to which Mr. Greene has contributed nothing, I felt the need to respond.

First and foremost, it is painfully clear that his knowledge of the field consists of a casual scan of an article which set off a few warning flags and sent him to view Dr. Farwell's web page just long enough to decide he was an expert on the subject and could comment intelligently on it. Armed with that dangerous little bit of learning, he gleefully began his sneering article, in which he compares amazingly well grounded scientific research to phrenology. What is most frustrating about some of the ignorant comments from Mr. Greene and one of his readers is that their false statements so clearly reflect that they devoted no effort to actually learning about the field they mock. The information contained on Dr. Farwell's page, as well as the links from it, contradicts many of the frothing fabrications Mr. Greene made up in order to appear clever in his article.

Of course, I do not mean to state that people who are not experts in the field should not feel welcome in the debate. I do think it reasonable to suggest that readers should put in a reasonable amount of time to investigate their questions before airing their views in a public forum. There is plenty of information about this technology which is easily available from Dr. Farwell's websites and numerous other publicly available forums. If you believe you have nailed a fatal flaw in a technology, ask before you attack. Mr. Greene is a man who is paid to act as a journalist and thus research his articles. His decision to write articles for the popular media without spending even a few minutes to check his wild and incorrect assumptions can only be attributed to laziness, sloppiness, and the puerile glee he evidently derives from his own public verbal masturbation.

There are two components to Mr. Greene's attack: an effort to discredit and undermine the scientific validity of Dr. Farwell's approach and its supporters, and commentary on the practical and ethical issues of the widespread distribution and use of the technology. On the latter point, I agree that such technology may understandably evoke concerns about an Orwellian nightmare. Mr. Greene is afraid of the government or big business requiring people to undergo brain scans in order to get a job or do other routine tasks. So am I. I agree that information which is collected for the purpose of ensuring someone is not a terrorist should not be analyzed for other factors, and that such analysis should constitute an illegal search. I have been in contact with Mr. Kirsch and, contrary to Mr. Greene's article, he does recognize that it would be of paramount importance to ensure that such information be used anonymously and not be applied to psychological profiling. Mr. Kirsch said this repeatedly in his article. Again, Mr. Greene's decision to prioritize the whims of his overexcited language area over reality has resulted in an uninformed attack which embarrasses him, misleads the public, and does a disservice to your readers.

On to Mr. Greene's efforts to belittle the research and its acceptance in the scientific community. He states that Dr. Farwell's:

Gibson-esque self-promoting Web site appears to be little more than a gallery of all the dupes he's taken in. These include US Senator Charles Grassley (Republican, Iowa), 60 Minutes correspondent Mike Wallace, and a slew of newspapers, magazines, and TV and radio producers -- along with our Kirsch.

Mr. Greene shoots himself in the foot here, as this is already a nontrivial list. A few of these media "dupes" include CNN, ABC, CBS, the Discovery Channel, and Psychology Today. Worse, Mr. Greene selectively ignores the much broader gallery of those who have reviewed this work. It has been published in numerous widely respected and peer reviewed scientific journals, including Psychophysiology, Electroencephalography and Clinical Neurophysiology, and the Journal of Forensic Sciences. It has been presented at dozens of meetings of informed professionals such as the Society for Psychophysiological Research, the American Psychiatric Association, and the American Polygraph Association. As Mr. Greene would know if he had the barest fraction of the knowledge of scientific procedures he likes to pretend he has, you cannot get articles into these without having them reviewed by a panel of objective scientists. As Dr. Farwell's doctoral thesis on the subject was approved by a faculty board at the University of Illinois at Urbana-Champaign (which is well known for its excellent research in cognitive electrophysiology), he must have also duped the five faculty who scrutinized his research and many others in that department. His research was ruled admissible in a district court in a murder case. While I am not a legal expert, I trust that this would not have happened without a thorough review process. Oh, and let's not forget hundreds more police officers, polygraph experts, FBI and CIA agents, forensic specialists, and others who have tried hard to find holes in Dr. Farwell's technology without success. Finally, as the brainwave complex which Dr. Farwell uses (called the P300 complex) is very, very well studied, and I do not know of a single article suggesting that it does not reflect memory, Mr. Greene is effectively calling thousands of scientists dupes as well.

What is most frustrating about Mr. Greene's article is that all of the information in this paragraph is easily accessible on Dr. Farwell's site. All he had to do was click on the very clearly labelled link for Dr. Farwell's references.

Even sloppier are Mr. Greene's next two paragraphs:

The technique has "a record of one-hundred per cent accuracy in research on FBI agents, research with US government agencies, and field applications," Farwell gushes.

He neglects to mention that the FBI trial involved a scant twenty-one test subjects, however -- hardly the makings of anything approaching statistical significance.

Mr. Greene neglects to mention that the particular paper he attacks is only one of many showing the same effect. Again, even a cursory glance at Dr. Farwell's references would have shown that he has numerous other publications showing the same effect in other groups. He has easily run hundreds of people through his approach. To isolate one paper and attack it for having 21 subjects would be like isolating Mr. Greene's article and assuming its standards are typical of The Register. I am at a loss as to why Mr. Greene seems to think he knows more about statistics than Dr. Farwell, the FBI, or scientific review boards. I imagine Mr. Greene is now grabbing a dictionary or calling a friend to find out what statistical significance means, something he should have done before printing his article. It is true that the FBI trial involved 21 subjects. This is more than in most published scientific articles, in which perhaps a dozen subjects are considered adequate. I urge Mr. Greene to try to publish his new mathematical approach to statistics in a peer reviewed journal and send me a copy of the publisher's reply.

Next comes journalism at its sloppiest:

Farwell also exhibits a Gibson-esque fascination with polysyllabic techno-gobbledygook.

"Words or pictures relevant to a crime are flashed on a computer screen, along with other, irrelevant words or pictures. Electrical brain responses are measured non-invasively through a patented headband equipped with sensors. A specific brain-wave response called a MERMER (memory and encoding related multifaceted electroencephalographic response) is elicited when the brain processes noteworthy information it recognizes. (The MERMER contains another, well known and scientifically established brain response known as a P300.)," he tells us.

"When details of the crime that only the perpetrator would know are presented, a MERMER is emitted by the brain of a perpetrator, but not by the brain of an innocent suspect. In Farwell Brain Fingerprinting, a computer analyzes the brain response to detect the MERMER, and thus determines scientifically whether or not the specific crime-relevant information is stored in the brain of the suspect."

Note the repeated use of the adverb 'scientifically' -- a mannerism much in evidence among marketing copywriters, and charlatans.

Admittedly, I am used to reading scientific terminology and am not the most objective judge, but Dr. Farwell's two paragraphs reprinted here seem very straightforward to me. True, big words are required, and this must have overloaded the few active regions of Mr. Greene's brain. This is the sort of ostrich maneuver which leads to burning astronomers at the stake. If Mr. Greene is confused by terminology, and indeed by any word exceeding one syllable, perhaps he should ask for help before publishing an attack. Note the repeated use of the adverb 'scientifically' -- a mannerism much in evidence among scientists.

I shall return to Mr. Greene's article shortly. Before then, I would like to continue addressing attacks on the scientific merit of this technology, one of which comes from DCVULTURE, which is Mr. Greene's pseudonym: 

He also has no data on uncooperative subjects -- he's assuming that you can't decline to cooperate if you agree to participate. i'd be a bit more confident if we had a little empirical data on that.

This is another example of a comment which would have been obviated with a little more research. Dr. Farwell published this 10 years ago and showed that the system in fact works better on uncooperative subjects. DCVULTURE is a professional journalist and really should have checked on this point before printing it. I urge readers not to follow Mr. Greene's lead and instead try to learn about their questions before assuming and printing an incorrect answer. I did point this error out to Mr. Greene in subsequent correspondence, and presented the reference very clearly (which is on Larry's website):

Farwell, L. A. and Donchin, E. (1991) The Truth Will Out: Interrogative Polygraphy ("Lie Detection") With Event-Related Brain Potentials. Psychophysiology, 28:531-547.

DCVULTURE later asks:

I'd also like to see some test data on highly imaginative subjects such as artists, writers, actors, etc. i'd like to see what problems a vivid imagination in combination with a fair bit of publicly-available information would create for the tests. and i'd like to know what effect a deep familiarity with games like Counter-Strike would have.

These are both excellent questions. First, a vivid imagination does not have a significant effect on the system's performance. Second, familiarity with terrorist technologies could indeed create false positives if questions and images are not selected intelligently. I have a separate little essay on the question of how to avoid false positives among those who study terrorism but are not terrorists, which I will be happy to share if anyone wants it.

A reader, BENLODGE, asks:

What i would like to know is whether or not this is pseudo-science. The Register presents it as such and then almost validates it at the end of the article. Even if it is valid, whats the differance between that and a lie detector test?

It is not pseudoscience. Take Mr. Greene's word if you wish, or listen to a scientist (in fact, thousands of us). It is completely different from a conventional lie detector test. In a conventional test, examiners look for specific changes associated with nervousness, such as increased heartbeat and sweating, on the assumption that people are more nervous when they lie. The brainwave based approach is not based on nervousness. Thus, it cannot be fooled by classic tricks like trying to act very calm throughout the test, taking a depressant drug, acting nervous all the time, etc. I would like to thank the readers for asking good questions, rather than resorting to Mr. Greene's tactic of saying something like, "This complicated neuromancery is just the same thing as a lie detector test."

Back to the rest of Mr. Greene's article. Those enjoying my vituperative stance to date (perhaps just me) are in for a disappointment, since I think Greene brings up some concerns which are worth attention. As I said above, I agree that we as a society need to keep a very close eye on how this technology develops and is implemented. It could be misused to disastrous effect. It gets into Fourth and Fifth Amendment issues, personal privacy and security, you name it. Had Mr. Greene focused his article on the ethical aspects of implementing this technology, rather than fabricating straw men to attack the technology itself, this would be a letter of support. To say that we should avoid discussing this technology simply because it seems too complicated or scary is like arguing against discussing bioethics because we find cloning too intimidating and confusing. Even worse than the head-in-the-sand approach is Mr. Greene's Stalinist tactic of lying to readers to make a technology appear less viable and more threatening than it is.

Mr. Greene writes, "But nothing's going to prevent the government or big business from associating personally-identifying data with these profiles one day down the road." The only thing that could prevent it is an informed populace insisting on very strict guidelines on how the data could and could not be used. To have an informed populace, we need correct information percolating through a society which is willing to commit a reasonable amount of effort to studying, pondering, and discussing important topics.

To borrow a comment from Mr. Kirsch, El Al has been grilling passengers for decades to reduce the risk of terrorist attack. A skeptic might have said, "Nothing's going to prevent the government from asking questions about people's sex lives someday." El Al does not. Why? Largely because the Israeli populace recognizes the need for such questioning for safety reasons, but would also be up in arms if questions strayed to the irrelevant or invasive.

In conclusion, I am happy to see that brain computer interface technology such as brainwave fingerprinting is beginning to get the attention it deserves. Many people do not recognize that the tremendous advances in neuroscience, mathematics, and computation in the last decade have presented our society with options never before seen. You can learn more from the EEG than ever before. These developments have tremendous implications not just for the classic security vs. freedom debate, but also for medicine, communication, interface design, therapy, forensics, and much more. Just as with other scientific advances, the best way to ensure that this technology is used to benefit rather than torment or marginalize people is to learn about it and think about it. Mr. Greene has done a great disservice to his readers by clouding the debate with his incorrect, snide, poorly researched article. In trying to appear the enlightened curmudgeon, he has instead spewed his transparent ignorance under the guise of journalism. I hereby challenge Mr. Greene to a public debate on the scientific value of brainwave fingerprinting. I am an easier target than Dr. Farwell; I don't even have a PhD yet. Otherwise, I recommend that Mr. Greene confine his attacks on pseudoscience to the practice of phrenology, though with his scientific perspicacity and research ethic, he may be duped into supporting it.

Brendan Allison, MS, PhD student
Brain computer interface project director
Cognitive Neuroscience Laboratory
Dept. of Cognitive Science
UC San Diego

Note: To avoid accusations that my comments are solely for personal gain, which Mr. Greene suggests of Dr. Farwell, I wish to include this disclaimer. I do not work with Dr. Farwell, have never met him, and do not stand to benefit in any way from the success of his company. I do not have any involvement in any company involved in brainwave fingerprinting. My comments are inspired only by my firm belief that this technology needs to be discussed openly and intelligently and my frustration and anger with Mr. Greene for obfuscating this discussion.


From: Denis Bélisle [mailto:denis@gunt.com]
Sent: Tuesday, October 09, 2001 9:52 PM
To: thomas.greene@theregister.co.uk; steve.kirsch@propel.com
Subject: brain fingerprinting

Gentlemen,

Just a note to let you know that Larry Farwell, although certainly a pioneer in the application of evoked potentials to the detection of deception, by no means has a monopoly on this subject. As an example, this year's annual congress of the Society for Psychophysiological Research is being held this week in Montreal, where a few works on that topic are being presented - the titles of which you may find at the end of this message.

I have been very interested in that field myself for some years now and I am currently involved in research using high density recording of evoked potentials in cognitive studies related to the detection of deception. Last week I responded privately to Thomas' first article on the subject, stating broadly my opinion on the subject of CNS measurements and the detection of deception, and, since there seems to be a kind of debate here, I would like to squeeze myself in with some more thoughts on the matter.

Like I said to Thomas, there is no doubt in my mind that the future of detection of deception lies in CNS measurements. But, at the moment, there is no way to tell the extent to which it will be usable - but it will be, and in some ways it already is, quite efficient. I just don't believe it will ever be ready enough to be accepted as a general measure of control.

Evoked potentials (EP), or any signal picked up through the scalp, are minute in nature and manifest themselves irregularly. Electrically speaking, the brain is a very noisy place, and in order to detect a clear indication that a few hundred or thousand neurons just fired at the same time, you have to be somewhat lucky. Science not being too fond of chance, the usual method used to study EPs is to run a series of trials, at least 40 or 50, often a hundred or more; those signals are then averaged, thereby eliminating background noise and the erratic behavior of the EP. This is for one target - usually you need more than one target, some irrelevant ones for example, to be able to make a comparison. So, all in all, claiming to steadily achieve conclusiveness with EP protocols in under 10 minutes would be unrealistic. You will have to add to that time some kind of resting period for the brain to calm down and reduce its background noise. Even in controlled lab conditions, a subject who has been exposed to moderately stimulating circumstances just prior to recording will not yield good results. Subjects in cognitive or perceptual studies involving brain imaging usually stay in a darkened room for 5 or more minutes prior to recording. And those are subjects coming into a research center, which is not a stressful environment to the extent an airport is. Also, the environment has to be very calm during the recording, otherwise you will pick up attentional artifacts, orienting or even startle responses, eye movements, etc. - all very damaging to EP recording. Furthermore, unless you are using a MEG (magnetoencephalograph), you have to prepare the subjects with electrodes. So, some more time there, along with health hazards. Even electrode caps or bands need to be precisely placed and properly checked for impedance. And then of course, finally, the raw data has to be transformed, results have to be analyzed, and a decision has to be taken about the individual being tested - but this last step can be automated for the most part.
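
To make the effect of averaging concrete, here is a small toy sketch in Python. The epoch length, the size of the component, and the noise level are numbers I invent purely for illustration (real EEG is far messier); the only point is that the residual noise in an average shrinks roughly as one over the square root of the number of trials:

    import numpy as np

    rng = np.random.default_rng(0)
    n_samples = 200                      # samples in one 800 ms epoch (invented grid)
    t = np.linspace(0.0, 0.8, n_samples)

    # Toy "true" responses: a P300-like bump for the target, nothing for the irrelevant.
    p300 = 10e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))   # 10 microvolt bump near 300 ms
    flat = np.zeros(n_samples)
    noise_sd = 15e-6                     # background EEG larger than the component itself

    def average_epochs(true_wave, n_trials):
        """Average n_trials noisy epochs of the same underlying response."""
        epochs = true_wave + rng.normal(0.0, noise_sd, size=(n_trials, n_samples))
        return epochs.mean(axis=0)

    for n in (1, 10, 50, 100):
        avg_target = average_epochs(p300, n)
        avg_irrelevant = average_epochs(flat, n)
        # The max over the epoch is a crude summary, but it shows the separation emerging.
        print(f"{n:3d} trials: target peak {1e6 * avg_target.max():5.1f} uV, "
              f"irrelevant peak {1e6 * avg_irrelevant.max():5.1f} uV")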

If everything goes right, it will be a miracle if you can get more than 4 persons an hour out of a testing booth. Bear this in mind, then look inside any busy airport. And if this isn't enough, think of the personnel needed: one attendant per testing booth, because you can't rely on the subject to fit himself properly with electrodes, or even a head band; the error rate would be too high, and it opens possibilities of cheating the system (unless, again, you are using a MEG - but the cost alone would be prohibitive). Time, cost, manpower - the yield just isn't there.
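
To put rough numbers on that point (the passenger volume is an arbitrary assumption; only the 4-per-hour figure comes from the argument above):

    booth_throughput = 4          # persons screened per booth per hour (from above)
    passengers_per_hour = 2000    # assumed load for a busy airport, purely illustrative

    booths_needed = passengers_per_hour / booth_throughput
    attendants_needed = booths_needed     # one attendant per booth
    print(booths_needed, attendants_needed)   # 500.0 booths, each with its own attendant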

But those practical obstacles are not the main reason why I think that a general use of EP for mass testing is impossible. I think that, eventually, we will be able to use EP or other physiological measurements to test almost any person, on almost anything. We just will not be able to test everyone, even on a single thing, conclusively. There can't be the equivalent of an intelligence test or an aptitude test for the detection of deception, for the same reason that there cannot be any single set of ordered spoken sentences that a lawyer may use to defend their clients all the time. Every brain is different and has been subjected to different events, and so mass testing would have to deal with an unmanageable multitude of different events. This is also the reason why I find the expression "brain fingerprinting" so preposterous. In fingerprinting there is one event, a print at a crime scene, that is identical (or nearly identical) to another event, a print in a database, which in turn is linked to only one individual on the planet. This level of cascading certainty is nowhere to be found in the case of "brain fingerprinting". There you have multiple brain events, averaged to a single index, linked to many different stimuli, themselves linked to many different external events. If it is hard to achieve detection of deception in this way with a specific criminal having committed a specific crime, using specific pictures or words that are heavily contextualised by the experimental situation, it is unachievable with general words or pictures on the general public. The future of CNS measurements in detection of deception is in individual investigative techniques; maybe in some distant future it could be acceptable as evidence in court, but mass detection, at best, is a quite somber utopia.

This being said, I am an experimentalist at heart and by trade, and I am perfectly willing to conduct a "well-powered study by a disinterested organization". Our lab has all the necessary talent and equipment, and my schedule will be clearing up a little in January, which leaves time to agree on a protocol and find some funding. Any suggestions as to the latter? :)

Cordially,

Denis

P.-S. Here are some of the works presented this week in Montreal that directly or indirectly concern the detection of deception with EP (see the full program at http://www.wlu.edu/~spr/):

13. Effects of a mental countermeasure on the outcome of the Guilty Knowledge Test by P300. Minori Sasaki, Shinji Hira, & Takashi Matsuda (Hiroshima Shudo University; University of East Asia)

14. The Pz-recorded P300 is highly accurate and sensitive to a memorial manipulation in an objective laboratory guilty knowledge test. Shinji Hira, Minori Sasaki, Takashi Matsuda, Isato Furumitsu, & John J. Furedy (University of East Asia; Hiroshima Shudo University; University of Toronto)

16. Streaming effect: Event-related potentials from implicit task within saturated memory flow. Denis Belisle & Dominique Lorrain (University of Sherbrooke)

23. The effect of repetition on deception-related ERP activity. Ray Johnson, Jr. & Jack Barnhardt (Queens College, City University of New York)

26. Differential responses in interrupted cognitive streams. Dominique Lorrain & Denis Belisle (University of Sherbrooke)

(the following might be more classical polygraph studies)

67. The validity of psychophysiological detection of deception with the Guilty Knowledge Test: A meta-analytic study. Gershon Ben-Shakhar & Eitan Elaad (The Hebrew University of Jerusalem; Israel National Police)

68. The effect of attentional instructions and leakage of relevant information on psychophysiological detection with the Guilty Knowledge Test. Gershon Ben-Shakhar & Vered Amihai Ben-Yaacov (The Hebrew University of Jerusalem)

---------------------------------------------------------
Denis Bélisle, Ph.D.
denis@gunt.com
Institut des matériaux et systèmes intelligents
Pavillon Marie-Victorin, local D6-1004
Université de Sherbrooke
2500, boul. de l’Université
Sherbrooke (Québec) J1K 2R1
Téléphone: (819) 829-7131
FAX: (819) 829-7141


My understanding is that these obstacles can be overcome.

Remember, the test need not be 99.99% perfect. We aren't trying to convict someone.

Even if it's only 90% OK, it will make it 10X harder for a terrorist to get in (we'll catch 9/10).

I'm optimistic that $1B in research grants in this area could lead to some breakthroughs. After all, we have things like CAT scans and MRIs... technologies that, even as a trained scientist, continue to amaze me.

So there are some people who will give up and say, it's too hard. There are others who will find a way to overcome the obstacles.

I don't believe the obstacles are so difficult that they cannot be effectively addressed in the next 3 years. But I could be wrong.

One thing I do know... If we don't try, we'll never know.


Dr. Belisle,

I also wish to thank you for your email. After the tiring exchange with Mr. Greene, it is delightful to hear from someone who bases his reservations about the technology on the facts. You make many good points, though I respectfully disagree with a few. I embed comments in your email below. We have a separate issue regarding who should be involved in the discussion. You may note that I am not sending this email to Mr. Greene, and I will explain why below. I also ask that no portion of this email be forwarded to Mr. Greene. I realize this may seem like an unusual move for a scientist (someone generally supportive of open discussion and information sharing), but I'll get to that.

> Just a note to let you know that Larry Farwell, although certainly a > pioneer in the application of evoked potentials to the detection of > deception, by no means has the monopoly on this subject. As an > example, this year's annual congress of the Society for Psychological > Research is being held this week in Montreal where a few works on that > topic are being presented - the titles of which you may find at the > end of this message.

This is correct. SPR is a widely respected organization where Dr. Farwell presented much of his research, and many top notch scientists in the field present their work there. I caught Ray Johnson's name in particular; his 1986 paper on the triarchic model of P300 amplitude remains a very influential (though somewhat dated) paper. I don't believe Dr. Farwell makes any claim to being the sole researcher in the field, though he is the only one pursuing commercial development of the technology, so far as I know.

> I have been very interested in that field myself for some years now > and I am currently involved in research using high density recording > of evoked potentials in cognitive studies related to the detection of > deception. Last week I responded privately to Thomas' first article > on the subject, stating broadly my opinion on the subject of CNS > measurements and the detection of deception, and, since there seems to > be a kind of a debate here, I would like to squeeze myself in with > some more thoughts on the matter.

You are welcome in the debate. I would love to discuss any results you have from your high density system in more detail. If you have 128 channels or more, I may be able to suggest some high powered help with data analysis (below).

> Like I said to Thomas, there is no doubt in my mind that the future of > detection of deception lies in CNS measurements. But, at the moment, > there is no way to tell the extent to which it will be usable - but it > will be, and in some ways it already is, quite efficient. I just don't > believe it will ever be ready enough to be accepted as a general > measure of control.

Well, in order to get further into this particular point, we need to discuss two things: the degree of accuracy necessary to make it an effective general measure of deception, and the overall cost of the system. Steve's point has merit. If we could detect them with only 90% accuracy, or for that matter anything above chance, that is in theory better than nothing. Whether such a system would be worth the trouble is a different issue.
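
As a crude illustration of why "better than nothing" and "worth the trouble" can come apart, here is a back-of-the-envelope sketch. Every number in it other than Steve's 90% figure (the specificity, the passenger volume, the number of guilty subjects) is an assumption I am inventing for the example, not a claim about any existing system:

    # Toy screening arithmetic under invented numbers.
    sensitivity = 0.90         # Steve's figure: 9 of 10 guilty subjects are flagged
    specificity = 0.98         # assumed: 2% of innocent subjects are falsely flagged
    passengers = 1_000_000     # assumed screening volume
    guilty = 10                # assumed number of genuinely dangerous passengers in that volume

    hits = sensitivity * guilty
    false_alarms = (1 - specificity) * (passengers - guilty)
    print(f"expected hits: {hits:.0f}, expected false alarms: {false_alarms:.0f}")
    # Roughly 9 hits against ~20,000 false alarms: the deterrent value may be real,
    # but the follow-up cost is dominated by innocent passengers who get flagged.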

> Evoked potentials (EP), or any signal appropriated through the scalp, > are minute in nature and manifest themselves irregularly. Electrically > speaking, the brain is a very noisy place and in order to detect a > clear indication that a few hundreds or thousands neurons just fired > at the same time, you have to be somehow lucky. Science not being to > fond of chance, that is why the usual method used to study EP is to > have a series of trials, at least 40 or 50, often a hundred or more, > those signals are then averaged, hereby eliminating background noise > and erratic behavior of the EP. This is for one target - usually you > need more than one target, some irrelevant ones for example, to be > able to make a comparison.

I do not agree with this point. It is true that the significant majority of papers do utilize signal averaging with large numbers of trials. However, it is possible to do some very useful things with fewer trials. In data I collected for my doctoral research, we can discriminate a P300 to an attended event vs. an ignored event well above chance using averages of four trials. These were not recorded under the best conditions for evoking a P300 (ISI of 500 ms, low stimulus novelty, simple stimuli, 12.5% target probability). The reason why most papers require more events is that they utilize simple pattern recognition approaches such as measures of peak amplitude or area. More recently, many papers have come out which use more sophisticated techniques. The 4-event discrimination we have for my thesis is made possible by the use of a neural network, a nonlinear approach which is excellent at recognizing patterns in noise. Ongoing research into new mathematical techniques is showing great promise in reducing the number of trials required for reliable discrimination. We are experimenting with an approach called independent component analysis (ICA). Drs. Makeig and Jung, two researchers in this field at the Salk Institute, published a '99 paper in which they were able to discriminate P300s on a single-trial basis. They could also separate out cognitive, motor, and response-related components of the P300. I ran some of my data through their Matlab ICA package, which you can get free from their site, and it nicely separated the P300, N2, a couple of motor components, and some noise. Single-trial discrimination of the P300 admittedly still seems a bit ambitious to me at this time, and is certainly a holy grail for electrophysiology, but I believe we do not need 40 or 50 trials for a reliable discrimination at present. This is an area of rapid research and I am confident that the next few years will see significant advances in P300 recognition.
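
For readers who want to see the flavor of the technique, here is a minimal toy sketch of ICA recovering independent sources from mixed channels. It is not the Makeig and Jung Matlab package mentioned above; I am sketching with a generic Python implementation instead, and the "sources" are invented signals rather than real EEG:

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 500)

    # Invented independent sources: a P300-like bump, an alpha-like oscillation, broadband noise.
    s1 = np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))     # slow positive bump
    s2 = np.sin(2 * np.pi * 10 * t)                       # 10 Hz rhythm
    s3 = rng.normal(0.0, 0.3, t.size)                     # noise
    S = np.c_[s1, s2, s3]

    # Mix the sources into three "scalp channels" with a random mixing matrix.
    A = rng.normal(size=(3, 3))
    X = S @ A.T

    # ICA tries to recover the original sources (up to order, sign, and scale).
    ica = FastICA(n_components=3, random_state=0)
    S_est = ica.fit_transform(X)

    for i in range(3):
        # Correlate each recovered component with each true source to match them up.
        corrs = [abs(np.corrcoef(S_est[:, i], S[:, j])[0, 1]) for j in range(3)]
        print(f"component {i} best matches source {int(np.argmax(corrs))} (|r| = {max(corrs):.2f})")

On this toy problem the recovered components should line up with the original sources almost perfectly; real multichannel EEG is never that clean, which is exactly why high density recordings are interesting here.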

Plenty of ICA material is at http://www.cnl.salk.edu/~scott/. Scott Makeig is also looking for high density data to analyze with ICA, is a veteran with the P300, and is very interested in practical applications of electrophysiology and brain computer interfaces. Dr. Belisle, you may want to check out his page, as this could be an easy paper for you both. It would also be very interesting to see whether ICA could improve the approach and counteract noise.

So, all in all, pretending to steadily > achieve conclusiveness with EP protocols in under 10 minutes would be > unrealistic. You will have to add to that time some kind of resting > period for the brain to calm and bring down it's background noise. > Even in controlled lab conditions, a subject that has been exposed to > moderately stimulating circumstances just prior to recording will not > yield good results. Subjects in cognitive or perceptual studies > involving brain imaging usually stay in darken room for 5 or more > minutes prior to recording. And those are subjects coming in a > research center, which is not a stressful environment to the extent an > airport is. Also, the environment has to be very calm during the > recording, otherwise you will pick up attentional artifacts, orienting > or even startling responses, eye movements, etc. - all very damaging > to EP recording. Furthermore, unless you are using a MEG > (magnetoencephalograh), you have to prepare the subjects with > electrodes. So, some more time there, along with health hazards. Even > electrode caps or bands need to be precisely placed and properly > checked for impedance. And then of course, finally, raw data has to be > transformed, results have to be analyzed and a decision has to be > taken as to the individual being tested - but this last step can be > automated for the most part.

I agree with many of these concerns. 10 minutes may be a bit optimistic with current technology. The background noise in an airport, as well as noise from the subjects themselves (who may be noisier than obedient students in a very calm environment), are real issues. I don't have experience with data recorded under these circumstances, so I hesitate to comment further. However, I also believe that ongoing advances in the field show great promise in dealing with these problems. The advances in pattern recognition make it easier and easier to filter out or subtract noise from various sources. Electrode technology, including dry electrodes, is also improving. And the automation to which Dr. Belisle refers is relatively easy - you can automate every stage from raw data to a decision. In Farwell and Donchin's 1988 paper on the use of the P300 as a mental prosthesis for those unable to communicate via conventional channels, they had an automated system able to make meaningful discriminations based on the P300 in near realtime. We are close to having this ourselves in C++ on an old 486/66 (we have funding problems too) and are sure that the limiting factor is programming skill and nothing more.
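
To show how little machinery the automated part needs, here is a deliberately simplified sketch of an end-to-end decision: epoch the raw signal at the stimulus onsets, average the probe and irrelevant epochs, and compare their amplitudes in a late window. This is a toy of my own for illustration, not Dr. Farwell's published analysis; the sampling rate, window, and criterion are arbitrary assumptions:

    import numpy as np

    FS = 250                                        # assumed sampling rate, in Hz
    EPOCH = int(0.8 * FS)                           # 800 ms epochs
    WINDOW = slice(int(0.3 * FS), int(0.6 * FS))    # 300-600 ms "late positivity" window

    def average_response(raw, onsets):
        """Cut epochs from a single-channel recording at the given onset samples and average them."""
        epochs = np.stack([raw[o:o + EPOCH] for o in onsets])
        return epochs.mean(axis=0)

    def decide(raw, probe_onsets, irrelevant_onsets, criterion=2.0e-6):
        """Toy rule: report 'information present' if the probe average exceeds the
        irrelevant average by `criterion` volts in the late window."""
        probe = average_response(raw, probe_onsets)
        irrelevant = average_response(raw, irrelevant_onsets)
        difference = probe[WINDOW].mean() - irrelevant[WINDOW].mean()
        label = "information present" if difference > criterion else "information absent"
        return label, difference

Fed a vector of EEG samples and two lists of stimulus onsets, decide() returns a label and the raw difference. In practice the hard work is in artifact rejection and in replacing the fixed criterion with a proper statistical test, but none of those steps needs a human in the loop.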

> > If everything goes right, it will be a miracle if you can get more > than 4 persons an hour out of the testing booth. Bear this in mind, > then look inside any busy airport.And if this isn't enough, think of > the personnel needed. One attendant per testing booth, because you > can't rely on the subject to fit himself properly with electrodes, > even a head band, the error rate would be too high and it opens > possibilities of cheating the system (unless, again, if you are using > a MEG - but the cost alone would be prohibitive). Time, cost, manpower > - the yield just insn't there.

This may be true, but on the flip side, I don't think anyone portrayed this as being anything close to a hassle free system. Just today, I heard the Taliban threaten America with more airline hijackings. I have had to fly recently (I presented some of my work at a neuroinformatics conference in Vienna) and the days of convenient flying as we knew it are over. There are a variety of new security measures being considered which are very expensive and slow. The way to think about the added inconvenience and cost of a brainwave based system is to consider what other options are being discussed and ask whether a brainwave system might be simpler, faster, and more accurate.

> But those practical obstacles are not the main reason why I think that > a general use of EP for mass testing is impossible. I think that, > eventually, we will be able to use EP or other physiological > measurements to test almost any person, on about almost anything. We > just will not be able to test everyone, even on a single thing, > conclusively. There can't be the equivalent of an intelligence test or > an aptitude test for the detection of deception. For the same reason > that there cannot be any single set of ordered spoken sentences that a > lawyer may use to defend it's clients all the time. Every brain is > different and has been subjected to different events, and so mass > testing would have to deal with an unmanageable multitude of different > events. This is also the reason why I find the expression "brain > fingerprinting" so preposterous. In fingerprinting there is one event, > a print on a crime scene, that is identical (or near identical) to > another event, a print in a database, which in turn is linked to only > one individual on the planet. This level of cascading certainty is > nowhere to be found in the case of "brain fingerprinting". There you > have, multiple brain events, averaged to a single index, linked to > many and different stimuli, themselves linked to many and different > external events. If it is hard to achieve detection of deception in > this way with a specific criminal having committed a specific crime, > using specific pictures or words that are heavily contextualised by > the experimental situation; it is unachievable with general words or > pictures on the general public. The future of CNS measurements in > detection of deception is in individual investigative techniques; may > be in some distant future it could be acceptable as evidence in court, > but mass detection, at best, is a quite somber utopia.

I also agree that the approach currently seems like it would be best deployed as a means of testing guilt or innocence in forensic situations. It has already been accepted in court for this purpose. I'd really like to go on, but I cannot right now, and I have a couple of important points to wrap up, so I will get to them.

> This being said, I am an experimentalist at heart and by trade, and I > am pefectly willing to conduct a "well-powered study by a > disinterested organization". Our lab has all the necessary talent and > equipment, and my schedule will be clearing up a little in January, > which leaves time to agree on a protocol and find some funding. Any > suggestions as to the latter?

Ah, I wondered when funding would get involved! Our lab is also superbly well equipped for this kind of testing, with several P300 experts. To Steve, who has a commercial background, this may initially appear bad, as it smacks of competition. In academia, this is good, because it screams joint grant application! Dr. Belisle, I would be happy to discuss this further. I have some solid ideas on how to partition the work - it seems you have the high density setup, and we have some outstanding pattern recognition people involved. We have been trying to get a solid application together spanning a few labs; we may want to discuss this with Drs. Makeig and Jung, as we could then involve ICA. We are also on excellent terms with Gary Cottrell and Robert Hecht-Nielsen, two neural network experts, and both of them are interested in grant applications. This is all stuff we can discuss soon. I am not sure how the international aspect would play out. In any case, I would have to defer discussions on grant issues to my advisor, Jaime Pineda, the lab director and PI. Another funding possibility is to pursue an SBIR or STTR with an outside business. We have been looking into this as well with a local company called ABM (Advanced Brain Monitoring, www.b-alert.com), but the possibility of an application with Brainwavescience and/or Propel is also there. I had not planned on bringing up funding issues yet, but we received some very bad news today. We had been told by NIH that our latest application would be funded. We got a call today that, due to recent events, some "funding had been reallocated" and they were not funding as many projects as previously expected. Looks like I get to teach for funding while trying to wrap up my thesis.

In response to a comment from Steve: "I'm optimistic that $1B in research grants in this area could lead to some breakthroughs. After all, we have things like CAT scans and MRIs... technologies that, even as a trained scientist, continue to amaze me."

I think everyone would agree that one should never underestimate the power of $1B in research grants. At this point, though, I am not optimistic about any imminent fundamental breakthroughs in human functional imaging technologies which would make any method other than the EEG practical for widespread use. Dr. Belisle correctly points out that a MEG is fairly expensive and complicated. The fMRI is a neuroscientist's wildest fantasy (we are dull people), but it requires a very strong magnetic field (weak ones are 1.5 Tesla), which would not be practical at airports and cannot be used with many people. PET scans require the injection of radioactive material. By "imminent," I mean that none would be ready for airport use within 3 years even with a massive funding surge (at least 5 years without). I am eager to see more about optrodes at the SFN conference next month - those could be a huge development if they can make some significant progress.

This does bring up something I had not thought of before. While fMRI is not practical for widespread use, it could be used in forensic cases. It would not be too much of a hassle to spend a couple of hours in the magnet if it could provide key evidence in a capital murder case or identify an already suspected terrorist. I believe that Larry Squire and Paul Reber found different fMRI activation for false vs. real memories. It would be interesting to explore how fMRI could provide additional data, especially in cases in which EEG measures are indeterminate, or expand the types of questions that could be asked. For example, it could be a means of detecting procedural memory by exploring the activation evoked in motor areas when viewing an image. (This gets back to my comment on the FAQ, in which I was thinking of using the ERD/ERS of the EEG to look at procedural memory.) Do you guys have comments on this?

The cost of 2 hours in the magnet (and you can do quite a lot in 2 hours) is fairly easy to estimate. Magnet time here costs $185 an hour, $225 at a 4 Tesla magnet. The advantage of a stronger magnet is that you improve the ratio of resolution to time. You need 2 technicians, who cost maybe $60 an hour each with their insurance and overhead, or you can use grad students, who are free but screw up a lot. Data analysis could be 10 minutes or less with a well defined paradigm and substantial automation, which is not unreasonable. You also have the cost of transporting the accused to and from a hospital. The total comes to less than $1000, and in my opinion there is good reason to believe it could provide useful info the EEG could not. I think this is a worthy research direction.
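
For concreteness, here is the arithmetic behind that estimate, using the hourly figures above; the transport cost is my own guess:

    magnet_rate = 225        # dollars per hour at the 4 Tesla magnet (quoted above)
    tech_rate = 60           # dollars per hour per technician, insurance and overhead included
    hours = 2
    technicians = 2
    transport = 200          # assumed round-trip transport for the accused; a guess

    session_cost = magnet_rate * hours + tech_rate * technicians * hours + transport
    print(session_cost)      # 450 + 240 + 200 = 890 dollars, i.e. under $1000 per session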

Regarding Mr. Greene, and why I'd prefer to keep him out of the loop:

Mr. Greene's original article contained some factual errors, as well as many errors of omission. If he wants to express concern about the technology and its application, that's fine. But he persistently clings to his misstatements. I pointed these out to him, clearly and repeatedly, and he still has not edited his article or made any correction. Here are only two examples:

1) In Mr. Greene's article in The Register, he states, "He neglects to mention that the FBI trial involved a scant twenty-one test subjects, however -- hardly the makings of anything approaching statistical significance." I assume that Mr. Greene is referring to the 1993 study from Dr. Farwell and Dr. Richardson which is on the Brainwavescience page. Mr. Greene's statement is not correct. That study was statistically significant. "Statistically significant" means that the study had a p value of less than .05. "Highly significant" means the p value was less than .01 (which it was). There are many other problems with his statement - he isolates one 8 year old study and neglects to mention the many other studies, and he calls 21 subjects scant when, by the standards of psychophysiology research, it is not.
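
To see why a small sample can still yield a highly significant result, consider a toy calculation. This is not the statistical test from the Farwell and Richardson paper (I do not have their analysis in front of me); it simply asks how likely a chance-level classifier would be to get 21 subjects right:

    from math import comb

    n = 21
    p_all_correct = 0.5 ** n
    print(f"P(all {n} correct by chance) = {p_all_correct:.1e}")    # about 4.8e-07

    # Even allowing a couple of errors, the binomial tail stays far below .01:
    p_19_or_more = sum(comb(n, k) for k in range(19, n + 1)) * 0.5 ** n
    print(f"P(19 or more of {n} correct by chance) = {p_19_or_more:.1e}")   # about 1.1e-04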

2) Mr. Greene, writing a reply to his article under his alias as DCVULTURE, wrote, "He also has no data on uncooperative subjects -- he's assuming that you can't decline to cooperate if you agree to participate. i'd be a bit more confident if we had a little empirical data on that."

Dr. Farwell and Dr. Donchin published a paper in Psychophysiology in 1991 which did study uncooperative subjects. Several studies since then have looked at uncooperative subjects and shown that the effect remains.

I noted both of these mistakes in a letter to the editor of The Register which I sent to Mr. Greene, and to which Mr. Greene replied. I made it clear that both of these points could be objectively verified through the links on the Brainwavescience page. I sent Mr. Greene another email later reiterating his factual errors and suggesting that he edit his article. He has written numerous emails to me and others, and he wrote a follow-up to his article, but he has completely ignored these flat-out incorrect statements. For these and other reasons (which I can spell out if asked, but it's late), I don't believe that Mr. Greene is genuinely interested in having an honest, informed, structured discussion on the subject. I think he is much more interested in getting attention. He writes for The Register, which does not seem to be a very serious magazine. When he first published his incorrect statements, that might have been an honest mistake. However, the fact that he completely dodges the issue while continuing the debate means it can no longer be considered an honest mistake.

Dr. Belisle, I thank you again for your email. I hope you will be attending the Society for Neuroscience conference next month in San Diego, when the mean daytime temperature will be about 70 degrees. I am presenting a poster on Wednesday, and would be happy to meet with you and talk further. I look forward to comments. Cheers, -Brendan