Issues in Brain Imaging: Reading Your Mind

Over time, neuroimaging technologies have grown more precise and detailed, and concerns have increasingly been raised about what the resulting images actually show. Can they be used to read minds? To predict behavior? Can they reveal that someone is lying, and what the truth really is, just from an image?

In recent years, these technologies have made huge leaps in capability, and such concerns are starting to have merit. Advances have made it possible to detect deception, turning the fMRI into an advanced form of lie detector (Simpson, 2008). Other, more controversial developments have also occurred: it has become increasingly possible to analyze and even predict complex human behaviors (Illes, 2010).

The existence of these technologies is not in itself controversial, but the uses to which the information may be put raise serious ethical concerns. Neuroimages have already been used in court cases (in one, the lack of neuroimages caused a homicide conviction to be overturned; Racine, Bar-Ilan, & Illes, 2006), and fMRI data from consumer preference studies is even being used to shape marketing strategies, creating a new field called “neuromarketing” (Racine et al., 2006).

Page by Nick Howard


Illes, J. (2010). Advanced neuroimaging: Ethical, legal and social issues. Retrieved from

MrCristea. (2009, January 9). Mind reading - fMRI - machine that reads your thoughts - 60 Minutes [Video file]. Retrieved from

Racine, E., Bar-Ilan, O., & Illes, J. (2006). Brain imaging: A decade of coverage in the print media. Science Communication, 28, 122-143. doi:10.1177/1075547006291990

Simpson, J. (2008). Functional MRI lie detection: Too good to be true? Journal of the American Academy of Psychiatry and the Law Online, 36(4). Retrieved from
