John Searle

John Rogers Searle is an American philosopher and currently the Slusser Professor of Philosophy at the University of California, Berkeley. Widely noted for his contributions to the philosophy of language, philosophy of mind, and social philosophy, he began teaching at Berkeley in 1959. He received the Jean Nicod Prize in 2000, the National Humanities Medal in 2004, and the Mind & Brain Prize in 2006. Among his notable concepts is the "Chinese room" argument against "strong" artificial intelligence...
Nationality: American
Profession: Philosopher
Date of Birth: 1 December 1932
Country: United States of America
Berkeley became a left-wing community as a result of the '60s. The university is about the same as it was before politically. But the city is much more to the left than it was prior to all of this.
I want to block some common misunderstandings about "understanding": In many of these discussions one finds a lot of fancy footwork about the word "understanding."
There are clear cases in which 'understanding' literally applies and clear cases in which it does not apply; and these two sorts of cases are all I need for this argument.
There is no success or failure in Nature.
The problem posed by indirect speech acts is the problem of how it is possible for the speaker to say one thing and mean that but also to mean something else.
"Akrasia" [weakness of will] in rational beings is as common as wine in France.
Berkeley had a liberal element in the student body who tended to be quite active. I think that's in general a feature of intellectually active places.
Where questions of style and exposition are concerned I try to follow a simple maxim: if you can't say it clearly you don't understand it yourself.
Where conscious subjectivity is concerned, there is no distinction between the observation and the thing observed.
I will argue that in the literal sense the programmed computer understands what the car and the adding machine understand, namely, exactly nothing.
In many cases it is a matter for decision and not a simple matter of fact whether x understands y; and so on.
Our tools are extensions of our purposes, and so we find it natural to make metaphorical attributions of intentionality to them; but I take it no philosophical ice is cut by such examples.
We often attribute 'understanding' and other cognitive predicates by metaphor and analogy to cars, adding machines, and other artifacts, but nothing is proved by such attributions.