The Mind Network is a community of UK researchers in philosophy of mind and cognitive science. We meet twice a year. We present papers, exchange ideas, and generally have a good time. Graduate students and new arrivals to the UK community are particularly welcome. Subscribe to our mailing list below to join us.

Next meeting
University of Warwick
Central European University
University of Birmingham
University of York
King's College London
University of Birmingham
University of Edinburgh
We do not share your email address with anyone. You can unsubscribe with one click.
Lisa Bortolotti questioned how autonomy is affected in people who suffer from delusions and confabulations. Do people affected by delusions and confabulations have the capacity to consent to treatment? Should they be allowed to make decisions that affect their well-being? Lisa argued that autonomy should be understood as self-governance, and made a distinction between (a) the capacity to govern oneself, and (b) whether one is successful at governing oneself. The capacity for self-governance depends on the capacity to develop a self-narrative. Being successful at, rather than merely having the capacity for, self-governance is determined by the coherence of self-narratives and their correspondence to real-life events. Lisa claimed that, in most cases, people with delusions or confabulations have the capacity for self-governance, but fail to govern themselves successfully. This is because they have failures of rationality and self-knowledge that impact on the coherence of their self-narratives and the correspondence between those narratives and real-life events.
Andy Clark discussed a new theoretical framework in psychology: predictive coding. According to the predictive coding hypothesis, we are, in essence, prediction machines: the purpose of much of our neural machinery is to predict what happens next. The predictive coding hypothesis applies to all aspects of our mental life, from perception to cognition to action. The predictive engine is claimed to be implemented in a neural hierarchy, with each layer in the hierarchy predicting the output of the layer below. ‘Back’ neural projections carry the predictions downwards to more peripheral neural systems and ‘forward’ projections carry the error signals upwards to more central systems, inverting the traditional neural functional hierarchy. Andy Clark carefully analysed the promises of the predictive coding framework and the challenges it faces. Andy introduces predictive coding on the Edge Foundation website: ‘What scientific concept would improve everybody’s cognitive toolkit?’
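The core loop of such a hierarchy can be sketched in a toy simulation. This is an illustrative gloss on the general idea, not anything from the talk; all names and parameters here are invented for the example:

```python
# Illustrative sketch: a minimal two-level predictive hierarchy. The
# higher level sends a prediction down; the lower level sends the
# prediction error back up; the estimate moves so as to cancel the error.

def run_predictive_loop(sensory_input, initial_estimate=0.0,
                        learning_rate=0.1, steps=100):
    """Return the final estimate and the history of error magnitudes."""
    estimate = initial_estimate
    errors = []
    for _ in range(steps):
        prediction = estimate                # 'back' projection: top-down prediction
        error = sensory_input - prediction   # 'forward' projection: bottom-up error
        estimate += learning_rate * error    # adjust to minimise prediction error
        errors.append(abs(error))
    return estimate, errors

estimate, errors = run_predictive_loop(sensory_input=5.0)
```

On this toy version, the error signal shrinks as the higher level comes to predict its input, which is the sense in which "the purpose of the machinery is to predict what happens next".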
Philip Gerrans discussed the neural basis of delusions. At present there are a number of candidates for neural correlates of delusion: abnormalities of dopamine regulation, failure to regulate ventromedial and dorsolateral processing, and right lateral hypofrontality. However, we do not understand the role of these neural properties in producing delusion because we do not know how to turn correlation into causal explanation. Philip proposed a new theoretical definition of delusion: delusion is the monopoly of mental time travel by hypersalient experiences. Mental time travel involves the integration of autobiographical memory and imagination in decision making. Recent evidence suggests that unsupervised mental time travel is the default mode of human cognition, and that it is distinct from the mode of decontextualised cognitive processing. A salient experience is an experience that attracts cognitive processing resources. Philip argued that his theoretical definition accurately captures the cognitive architecture that produces delusional psychology and phenomenology. He claimed that the account also has the virtue that it allows us to move from correlation to causal explanation by showing how mechanisms at different levels of the subject, from the molecular to the personal level, stand in relations of mutual manipulability.
Matt Soteriou aimed to resolve the disagreement between Michael Bratman’s planning theory of intention and David Velleman’s epistemic account of intention. Matt argued that the right account of intention should draw elements from both approaches. His suggestion was that the apparent disagreement between these approaches can be reconciled by appeal to a common notion: a notion of self-governance. Self-governance provides the crucial connection between the mental actions of practical deliberation and planning (which feature in Bratman’s planning account) and the kind of practical self-knowledge that intention can embody and that one’s actions can realise (which feature in Velleman’s epistemic account).
Certain ‘imagistic’ representations - e.g. sensory mental images, many pictures - stand in a special relationship to our sensory powers. The first part of the talk will present an account of the nature of the ‘distinctively sensory’ contents possessed by all of these representations, one that seeks to explain the manner in which their contents depend upon forms of sensory experience. The second part of the talk will then explore the possibility that the dependency relations also sometimes go in the other direction, by suggesting that the contents of certain expectations that have been thought to be crucial to our ability to see items as externally located are distinctively sensory ones.
Maja Spener brought the notions of ‘Good Visual Experiences’ and abilities to bear on the debate between experiential monists and experiential pluralists. Good Visual Experiences are those which ‘figure in seeing the world aright’; we might think of them as consciously presenting the world the way it is, and as being world-involving. Spener suggested that if there are visual experiences which are world-involving, Experiential Pluralism follows. She then mounted an argument for Experiential Pluralism on the grounds that the possession of some of our situation-dependent abilities is explained by appeal to good visual experiences.
This talk will look at an underdiscussed challenge to Radical Interpretation (construed as a metaphysical story about the foundations of intentionality). The challenge is mentioned in passing in Lewis’s “New work for a theory of universals” and recently re-presented by Brian Weatherson. The upshot threatens to be this: if two possibilities are evidentially and agentially the same for a subject, then the subject cannot represent the difference between the two. But many pairs of possibilities we do distinguish may satisfy the antecedent: sceptical and non-sceptical scenarios, roughly. This is a kind of Berkeleyan representational scepticism. I look at responses available to the radical interpreter, and argue that many are mere verbal victories, and don’t address the underlying sceptical challenge.
What is it for several agents to intentionally act together? Put differently, what is it for them to have a “shared intention”? According to reductionist accounts, intentional joint action can be understood by exclusive appeal to conceptual resources that are needed anyway for understanding singular action in a social context. In this talk, I will discuss epistemic/doxastic conditions on intentional joint action given such a reductionist approach. Most reductionist accounts include a condition that it must be common knowledge between participants that they have certain intentions and beliefs which cause and coordinate their joint action. By rejecting three arguments that could potentially support such a condition, I argue that reductionists should abandon common knowledge as a condition on intentional joint action as such. On the other hand, many reductionist accounts lack a condition which ensures that each participant believes that his or her intended end is a single end intended by each. Without such a doxastic single end condition, the accounts fail to distinguish intentional joint action from mutual exploitation and unintentional joint action.
According to Sterelny (2010), cognition is deeply environmentally supported or “scaffolded”. In my talk I will illustrate various ways in which not only cognition, but affectivity as well is scaffolded. I will focus in particular on its material scaffolds, and show how they can come to be “incorporated” into our affective episodes. In doing so I will draw on existing phenomenological accounts of incorporation into the sensorimotor domain (as in the case of the blind person’s cane), and argue that material items can be incorporated also in the affective domain, in a variety of ways.
According to Bayesian approaches to perception, the presence of bias optimizes perception. This raises a question about the status of perception of the unexpected. Perception of the unexpected occurs when we encounter novel or atypical events. Because this form of perception is a result of invalid expectations, it might be treated as suboptimal: it decreases accuracy and amplifies uncertainty. I argue that we need to rethink the notion of optimality for experiences of the unexpected. Focusing on two forms of perception of the unexpected – experiences of change (noticing a new building on the way to work) and of absence (seeing an elephant vanish in a circus trick) – I show that both can be understood as involving optimal decisions. I then explain why optimization is harder to achieve for perception of absence than it is for the perception of change.
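The claimed link between bias and optimality can be illustrated with a toy Bayesian observer. This is a hedged sketch, not the speaker's model; the probabilities and function names are invented for the example:

```python
# Illustrative sketch: a MAP observer whose prior is biased toward the
# statistically common state of the world versus one with a flat prior.
import random

random.seed(0)

def map_guess(obs_says_common, p_prior_common, p_obs_correct=0.7):
    # Posterior for each hypothesis via Bayes' rule (likelihood * prior).
    if obs_says_common:
        post_common = p_obs_correct * p_prior_common
        post_rare = (1 - p_obs_correct) * (1 - p_prior_common)
    else:
        post_common = (1 - p_obs_correct) * p_prior_common
        post_rare = p_obs_correct * (1 - p_prior_common)
    return post_common >= post_rare  # True = perceive the common state

def accuracy(p_prior_common, p_world_common=0.9, trials=10000):
    correct = 0
    for _ in range(trials):
        world_is_common = random.random() < p_world_common
        obs_right = random.random() < 0.7  # noisy sensory evidence
        obs_says_common = world_is_common if obs_right else not world_is_common
        if map_guess(obs_says_common, p_prior_common) == world_is_common:
            correct += 1
    return correct / trials

biased = accuracy(p_prior_common=0.9)  # prior matches world statistics
flat = accuracy(p_prior_common=0.5)    # unbiased prior
```

On these numbers the biased observer is more accurate overall, yet it achieves this by effectively always perceiving the expected state, which is precisely why its perception of the unexpected is systematically wrong.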
Traditional eliminativism is the view that a term should be eliminated from everyday speech due to failures of reference. Following Edouard Machery, we may distinguish this traditional eliminativism about a kind and its term from a scientific eliminativism according to which a term should be eliminated from scientific discourse due to a lack of referential utility. The distinction matters if any terms are rightly retained for daily life despite being rightly eliminated from scientific inquiry. In this article, I argue that while scientific eliminativism for pain may be plausible, traditional eliminativism for pain is not. I discuss the pain eliminativisms offered by Daniel Dennett and Valerie Hardcastle and argue that both theorists, at best, provide support for scientific eliminativism for pain, but leave the folk-psychological notion of pain unscathed. One might, however, think that scientific eliminativism itself entails traditional eliminativism—for pain and any other kind and corresponding term. I argue that this is not the case. Scientific eliminativism for pain does not entail traditional eliminativism about anything.
There are familiar cases of recalcitrant emotions, e.g. believing that flying is safe, but fearing it nevertheless. In much of the literature, recalcitrant emotions are wheeled in as puzzling cases for a competing view, though very often the data itself ends up driven by the very theories under question. We believe that there has been insufficient theory-neutral discussion of recalcitrant emotions and that, in fact, it is less than obvious that there is a theory-neutral puzzle that needs to be dealt with. So, we think, it is worthwhile to look at the data from a theory-neutral perspective. It is worthwhile both in its own right and also because of the light it sheds on the landscape of the emotions. We each offer our own upshot on the basis of this assessment. One of us (Majeed) argues that the recalcitrant nature of recalcitrant emotions has been exaggerated in that they are receptive to top-down cognitive influences. The other (Grzankowski) argues that a richer landscape with respect to recalcitrance also gives way to a richer landscape for theorizing about the intentionality and normative assessability of the emotions, which has too often been couched in terms of propositional contents. We end by exploring the consequences of these for the various ways emotion theorists handle recalcitrant emotions.
I argue that both experiences and reality can be a great deal more ‘sparse’ than you might initially believe. There can be experiences that are determinately phenomenally warm-colored, but not any particular warm shade; there can be experiences of objects standing in spatial relations to one another, but not any particular spatial relations; there can be experiences of triangles, that aren’t determinately equilateral, isosceles, or scalene, for the relationships between the lengths of sides and angles are ‘left open’. Further, for each such ‘sparse’ experience, there is a corresponding possible world. There are possible worlds in which objects stand in spatial relations to one another, but not any particular spatial relations — e.g. in which one object is determinately above another, but where their horizontal positions are left open. There are possible worlds in which there are triangles that are not determinately equilateral, isosceles, or scalene.
Photograph taken by Pomdu
This talk provides a starting point for psychological research on the sense of commitment within the context of joint action. I begin by formulating three desiderata: to illuminate the motivational factors that lead agents to feel and act committed, to pick out the cognitive processes and situational factors that lead agents to sense that implicit commitments are in place, and to illuminate the development of an understanding of commitment in ontogeny. In order to satisfy these three desiderata, I specify a minimal framework, the core of which is an analysis of the minimal structure of situations which can elicit a sense of commitment. I then propose a way of conceptualizing and operationalizing the sense of commitment, and discuss cognitive and motivational processes which may underpin the sense of commitment. Finally, I present results from ongoing experiments testing hypotheses generated by the framework.
To say that an action is done out of emotion implies that it can be explained by the agent’s emotional evaluation of the situation – as offensive or threatening, say. How should we understand this explanation? Does the agent’s emotional evaluation supply one of the reasons on which they act? The question turns on how the evaluation generates the action. Here, I focus on a particular kind of emotional action: voluntary actions aimed at dealing with the situation. (Examples might include damaging something out of anger or checking one’s will out of anxiety.) I argue that, even for this restricted class of emotional actions, we cannot give a single account: two models are needed. Moreover, while one model suggests that emotional evaluations do provide reasons for action, the other suggests that they do not. Hence, the question has no simple answer: it depends on the details of each case.
An agent generally knows what she is intentionally doing, and generally experiences herself in various ways as the agent of – the thing actively directing – these intentional actions. In this talk I will reflect upon the relationship between these two phenomena – the knowledge of action and the experience of agency. First, I ask what it is about knowledge of action that has intrigued so many philosophers. I focus on Elizabeth Anscombe’s influential view on which knowledge of action is practical (or active), independent from observation, and located within a sphere over which the agent has unique, first-personal authority. Reflection on how best to understand these features of knowledge of action generates explanatory difficulties that seem to remain in spite of the philosophical effort devoted to their resolution. After isolating certain difficulties, I turn from work on knowledge of action to Tyler Burge’s influential view on knowledge of one’s own attitudes. I elucidate Burge’s argument in order to ask whether a version of it could apply to knowledge of action. Drawing on recent work on the cognitive architecture of action control, and drawing as well on Brian O’Shaughnessy’s work on the rational structure of the stream of consciousness during action, I argue in the affirmative. The view that emerges, I claim, captures the senses in which knowledge of action is practical (or active), independent from observation, and located within a sphere over which the agent has unique, first-personal authority.
Photograph taken by Eric Kilby
What makes the sciences of the mental unified? Allen Newell (Newell 1973, 1990) famously proposed to unify the research by appealing to cognitive architectures. Cognitive architectures are structures whose function is to display phenomena studied by psychologists, be it abstract problem solving, limitations of short-term memory, or temporal patterns of responses to stimuli in experiments. They have become a major approach to modeling the mind as a unitary phenomenon in cognitive science, for example in ACT-R (Anderson 2007), and they have remained immensely important in cognitive neuroscience. Instead of building minimal micro-models of particular psychological tasks, researchers can appeal to unified cognitive architectures whose structure is supposed to be biologically plausible.
But are contemporary cognitive architectures really unified? Do they really bring about a unified theory of the phenomena in question, or just a motley of individual results collected in the single computational simulation? Maybe cognitive architectures are just a misnomer, and these are rather cognitive slums filled with temporary constructs. One easy reply, along the lines of massive modularity of mind, would be that minds are exactly that: motley collections of cognitive features that might look like junk from a distance.
My approach is however different. The question is: what renders a model of mind unified and integrated? Is there a way to produce a unified cognitive architecture, and not just an integrated one? I will insist that explanatory unification is the process of developing general, simple, elegant, and beautiful explanations, while explanatory integration is the process of combining multiple explanations in a coherent manner. My argument is that researchers have been busy with integrating individual results, and not with unifying the model. But this is not necessarily a bad thing.
I propose a new account of the nature of implicit bias, according to which implicit biases are unconscious imaginings. I begin by introducing implicit bias in terms congenial to what most—if not all—philosophers and psychologists have said about its nature in the literature so far. I then ask what we are looking for in an account of implicit bias, so as to lay out the desiderata to be met by my account, which then frame the discussion of it. Next I lay out my proposed account and the explanatory work it can do. I close by outlining and responding to some potential objections to my account, before concluding that the thesis that implicit biases are unconscious imaginings ought to be taken seriously.
According to a popular view in contemporary epistemology, the correct application of one’s cognitive abilities in believing truly, or in the process of coming to believe truly, is necessary and sufficient for a certain kind of credit that is, in turn, necessary for knowledge. By and large, epistemologists who think that cognitive abilities perform this kind of fundamental epistemic role take the cognitive abilities concerned to be based in various states and processes that are spatially located inside the head of the knowing subject. Enter the hypothesis of extended cognition (henceforth ExC). According to ExC, the physical machinery of mind sometimes extends beyond the skull and skin. In this talk, I shall explore what happens when the credit condition on knowledge is brought into contact with ExC. Via discussions of (a) empirical psychological work on the adaptive character of technologically augmented memory, (b) some famous and not-so-famous thought experiments from the extended cognition and extended knowledge literatures, and (c) philosophical work on what is required for a subject to own her cognitive states and processes, conclusions will be drawn both for ‘knowledge in the wild’ and for ExC.
Photograph taken by Naomi Racz
Mindreading is the ability to ascribe mental states to others. It’s widely held that attempts to detect mindreading in animals face a vicious problem known as the ‘logical problem’ - according to which empirical methods currently used to detect mindreading cannot, in principle, detect it. I argue that the situation is, in a way, worse than this. There are two, non-equivalent conceptions of mindreading at work in mindreading research. As a result, mindreading research faces not one logical problem, but two. Fortunately, this doubling of logical problems is not doubly problematic. Only one of the logical problems should trouble us and this one, I argue, can be solved.
Perceptual constancies, such as we encounter in our visual experience of shape, size, and colour, are among the most significant yet perplexing aspects of perception. Colour constancy is widely taken to involve some invariance in our perception of objects’ monadic colour properties – properties such as red23 and green17 – under changes in illumination. In contrast, an important yet neglected cluster of empirical theories focuses on perceived constancies in the colour relations borne between objects in the scene (Craven & Foster 1992). Such relational theories neatly explain some recalcitrant data, but present philosophical puzzles concerning the supposed phenomenology and content of relational constancy phenomena. I take a closer look at these puzzles and propose a resolution. The ensuing account has wider implications. For one, it undermines the standard monadic determination view of relational colour perception, on which the colour relations that we perceive as holding between two objects are determined by the monadic colours that we perceive those objects as having. In addition, the account implies a revisionary view of the role of colour vision in our perception of objectual form.
Many debates in philosophy of mind focus on whether folk or scientific psychological notions pick out cognitive natural kinds. Examples include memory and emotions. A potentially interesting kind of kind is: kinds of mental representations (as opposed, for example, to kinds of psychological faculties). My talk will focus on how kinds of representations are identified. In psychology kind identification is often based on the presence of signature effects. Signature effects are causal-functional roles that reveal both the properties of the underlying representational vehicles and what they refer to. I oppose this way of discovering representational kinds to other existing strategies: via the semantic content of representations, via their evolutionary history, via their implementation (in the brain).
Photograph taken by Giuseppe Milo
In March 2016, Google DeepMind’s computer programme AlphaGo surprised the world by defeating the world-champion Go player, Lee Sedol. Go is a strategic game with a vast search space (including many more legal positions than atoms in the observable universe), which humans have been playing and studying for over 3000 years. Watching the tournament, the Go community was struck by AlphaGo’s moves—they were surprising, original, “beautiful”, and extremely effective. The moves were described as “creative” by the Go community and in follow-up talks on the subject, Demis Hassabis—leading AI developer and CEO of Google DeepMind—defended them as such. Should we understand AlphaGo as exhibiting human-like insight? Answering this question requires having an account of what constitutes insightful thought in humans and developing tests for measuring this ability in nonhuman systems.
In this talk, I draw on research in cognitive psychology to evaluate contemporary progress in AI, specifically whether new programs such as AlphaGo are best understood as exhibiting insight. Recent cognitive accounts of insight emphasise the importance of mental models (e.g., general causal models of the physical world) for generating insightful behaviour. Such models allow individuals to solve problems and make predictions in situations they have never encountered before. How do we determine whether and when new artificial agents are capable of employing such models? Here insights from comparative psychology can help. Over the last 40 years, comparative psychologists have been developing tests for identifying the use of mental models in nonhuman organisms. The application of such tests to AI may help us not only interpret Deep Neural Networks, but suggest ways in which the technology might be improved.
The traditional problem of surveillance or privacy concerns personal data and behaviour – and it is believed that what we humans think, feel, desire and plan must be private because access to these cognitive processes is practically impossible, or even impossible in principle. We argue that current technical developments in live brain-imaging, EEG, brain implants and other brain-computer interfaces (BCIs) make it practically possible to detect data from the brain, analyse that data and extract significant cognitive content – including content that is not accessible to the subject themselves. Though all current techniques require close proximity, they do not require a conscious or collaborative subject. We conclude that neurosurveillance is a real, current threat to privacy. These considerations have relevance for two traditional issues: a) the alleged epistemic inaccessibility of phenomenal content in ‘other minds’ and b) the relevance of the philosophy of mind for empirical questions, generally.
Recent progress in artificial intelligence sparks new interest in an old philosophical question: Can machines think? In this talk I will consider the use of Machine Learning (ML) methods to develop intelligent thinking machines. Two criteria will be considered: behavioral indistinguishability and procedural (or algorithmic) similarity. It seems probable that ML methods will eventually yield computers that satisfy the former. But what about the latter? The inner workings of ML-programmed computers such as deep neural networks and reinforcement learning agents may be no easier to understand than those of human cognizers. Thus, I will review empirical methods for addressing this ‘Black Box Problem’, e.g. experimental techniques and methods of mathematical analysis. I will also consider a priori reasons for thinking that ML-programmed computers will not only become behaviorally indistinguishable from humans, but that they will also exhibit a degree of procedural similarity. Because these computers are nurtured and situated in the real-world environment that is also inhabited by humans, the methods they will acquire in order to engage that environment are likely to mirror our own.
Cosmic hermeneutics is the claim that one can, using only armchair reasoning, deduce all the truths about the world from a limited set of low-level truths. For example, a supporter of cosmic hermeneutics claims that one can deduce all the facts about conscious experience from the truths of microphysics. The possibility of cosmic hermeneutics is denied by most contemporary physicalists. Physicalists typically insist that the connection between physical and phenomenal truths is only discoverable a posteriori. In contrast, Derek argued that cosmic hermeneutics, and therefore an a priori deduction, is possible. He argued that cosmic hermeneutics is possible if a speaker adopts the right language. A special language in which to describe the low-level truths would enable its speakers to perform the relevant deductions purely in virtue of their linguistic competence. Derek concluded that this result does not threaten physicalism.
Louise Richardson’s paper focused on the relationship between scientific work on the senses and our everyday thought and talk about them. She approached this subject via a puzzle about flavour perception: what does holding your nose when trying some fruit-flavoured sweets do? We usually think that flavours are just tasted; does holding your nose impair your ability to taste the sweets? Or does it prevent smell from playing its usual role in flavour perception? Louise argued that findings in the psychology of flavour perception don’t settle this question. To determine whether such findings show that we’re wrong to think of flavours as just tasted, we need to know what we commit ourselves to when we think this. Louise argued that it’s far from clear that we’re committed to anything that the scientific findings show to be false.
Nick Treanor’s paper functioned as an introduction to a larger project that sits at the intersection of philosophy of mind, metaphysics and epistemology. The central problem is how to understand what it is to know more, or what it is to ameliorate ignorance. This raises issues in philosophy of mind concerning the nature of belief: whether beliefs are properly understood as individuals (and hence as countable). It also raises issues in metaphysics: at the heart of the problem is the question of what the world is like such that more of it can be known. It also raises issues in epistemology: how we understand what it is to know more or what it is to ameliorate ignorance will have consequences for our understanding of epistemic normativity and the aim of belief. Nick’s paper focused on developing the problematic and exploring its shape and character.
Photograph taken by Rwenland
Under experimental conditions, behavior suggesting dual intentional agency is easily elicited from split-brain subjects who nonetheless usually behave in a unified fashion. This paper presents a model of split-brain agency to account for this apparent tension. Right and left hemisphere are associated with distinct intentional agents, R and L, each of whose unity is grounded in inferential relations that its reasons and intentions bear to each other. These same relations do not hold interhemispherically; rather, unified behavior is largely the result of a split-brain subject’s having a single body, much of whose functional integrity in action is maintained by forces operating downstream of reasoning and intention-formation. R’s and L’s intentions still both belong to the same superordinate agent, however, for they bear, with respect to one and the same body, those special causal powers that my intentions (and mine alone) bear to my body (and to my body alone).
Mark Sprevak’s paper examined a type of argument that has been used both to criticise and to justify the hypothesis of extended cognition (HEC). HEC claims that human cognitive processes can, and often do, extend outside our head to include objects in the environment. HEC has variously been criticised and justified by appeal to inference to the best explanation (IBE). Advocates and critics of HEC claim that we can infer the truth value of HEC based on whether HEC makes a positive or negative explanatory contribution to cognitive science. If assuming HEC makes a positive explanatory contribution to cognitive science, we should infer HEC’s truth. If assuming HEC makes a negative explanatory contribution, we should infer its falsity. Mark Sprevak argued that this general strategy, shared by both advocates and critics of HEC, does not work. The reason is the existence of a rival hypothesis to HEC with a differing truth value, but negligible difference in explanatory value. The existence of this explanatory rival invalidates IBEs for both the truth and the falsity of HEC. Explanatory value to cognitive science is simply not a guide to the truth value of HEC.
It’s often observed that perceptual experience is transparent—we ‘see through’ the qualities of experience to the objects of experience. In recent times, such observations have been invoked to support various claims about perceptual experience: the existence of qualia, representationalism, and naive realism. Dave argued that the phenomenon of transparency can also be invoked to support ‘strong enactivism’—the thesis that capacities for experience and for agency are essentially interdependent. If strong enactivism were true, then we would expect to find a phenomenon that parallels perceptual transparency in agency. Moreover, our constitutive explanation of perceptual transparency should make essential reference to agency, and the constitutive explanation of the parallel phenomenon for agency should make essential reference to perception. Dave argued that there is indeed a parallel phenomenon of transparency in the domain of action, and that the two instances of transparency should be given parallel constitutive explanations of the form the strong enactivist requires. A consequence is that we should think of the transparency of perceptual experience as something that is achieved, not given.
I have argued that perceptual experience is a (peculiar) kind of belief. The doxastic account of experience I suggest further construes the contents of experience as ‘phenomenal’. Visual phenomenal contents, for instance, are contents of the form ‘x looks F’ or ‘It looks as if Fx’, where F is suitably sensible and ‘looks’ is construed phenomenally. Such contents ascribe ‘phenomenal properties’ to ordinary material objects. In this talk, I shall investigate two connected worries about phenomenal contents. The first worry is that experiential contents are not ‘looks-indexed’, i.e. that the way things look does not determine any content for the relevant experience (Travis 2004). Prima facie, this would seem as much of a problem for an account of experience working with phenomenal contents as for any other account according to which perceptual experience has representational content. I shall argue that this is false; properly construed, the way things (phenomenally) look does determine experience content—if this content is construed phenomenally. The second worry is that phenomenal properties might nevertheless not be suitable for experiential content (Chalmers 2004, Brogaard 2010): Phenomenal properties are patently subject-relative. Experiential contents involving such properties cannot even be shared across phenomenal duplicates. Moreover, the complexity of such properties falsifies the phenomenology of experience. I shall argue that these objections lose their force if we construe the subject-relativity of the phenomenal properties represented in experience on the model of unarticulated constituents.
Much of the discussion of Naive Realism about veridical experience has focused on a consequence of adopting it—namely, disjunctivism. However, the motivations for being a Naive Realist in the first place have received relatively little attention in the literature. In the first part of the paper, I will criticise the arguments for Naive Realism offered by M.G.F. Martin, John Campbell, and (some exegetes of) John McDowell. In the second part, I will elaborate and defend a motivation lurking in the work of Mark Johnston and made explicit by William Fish, to the effect that Naive Realism dissolves at least one of the “hard problems” of consciousness.
I discuss perceptual accounts of mindreading, according to which some of our knowledge of others’ mental states is perceptual knowledge. I argue that views such as that set out by Dretske and Cassam cannot respect a plausible phenomenological constraint on such accounts. I suggest that we need to pursue a view that incorporates certain claims about the way people look. The view proposed relies on a distinction between basic and non-basic looks.
Photograph taken by Craig Cormack
This paper attempts to draw some lessons about the nature of belief from considerations concerning beliefs’ ‘bedfellows’: states that are not paradigmatic beliefs but are belief-like in certain important respects. I examine the merits of various proposals about how to categorize such states, before turning to the question of what such states might be able to teach us concerning the nature of belief and the propositional attitudes more generally.
Consciousness and cognition are two difficult topics in the study of mind; the various relations between them are even more daunting. In recent years, Ned Block (1995, 2007, 2011, etc.) has been pushing the view that the content of consciousness is, in general, richer than the content of cognition. This view, OVERFLOW for short, is controversial and has important implications for both philosophical and empirical issues. The present paper develops a version of this view and discusses relevant ramifications. Section 1 offers some preliminaries, and section 2 introduces the latest version of Block’s view based on the famous Sperling paradigm. Section 3 explains why parts of Block’s view are implausible, and elaborates a weaker version of OVERFLOW. Section 4 discusses further issues. First, I discuss Ian Phillips’s postdiction interpretation (2011) and explain why it is compatible with OVERFLOW. Secondly, I reply to a potential objection from Block that my view cannot accommodate a series of experiments conducted by Victor Lamme’s lab (e.g., 2003). Last, I connect the present discussion to another important literature: the multiple-object-tracking discussion in psychology. The general moral is that varieties of attention and visual indexes can explain different levels of visual experiences.
In this paper I examine a recent debate in the philosophy of mind concerning the existence of ‘cognitive phenomenology’ (CP) by focusing on the arguments of Charles Siewert and Jesse Prinz. I argue that although Prinz’s account adequately explains away the cases Siewert presents in terms of non-cognitive phenomenology, there are still Siewert-inspired cases that elude Prinz-style explanation, and thus support the existence of a generic, coarse-grained sort of understanding experience. I suggest that these considerations support what I call the Weak, but not the Strong, CP-Existence thesis.
Contemporary consciousness science is messy. A wide range of behavioural and neurophysiological measures have been proposed as ‘good’ markers of the presence of consciousness, but so far little progress has been made on building a tested taxonomy of kinds and measures of consciousness. Seth et al. (2008) suggest that an integrative, comparative approach is likely to help. By comparing measures within the same experimental paradigm, we can identify the range of types of consciousness being measured, and the most sensitive or reliable measure of them. Similarly, Shea and Bayne (2010) suggest that a cluster of highly correlated measures can be used to infer the existence of a scientific kind of consciousness that they all measure. These methodological approaches are common, productive and reasonably straightforward ways to deal with messy research. However, I suggest that they are unlikely to work in the standard way in consciousness science. By appealing to criteria for identifying successful cases of clustering, and recent experimental work, I suggest that describing the kinds that underlie these clusters as kinds of consciousness is methodologically unwarranted. While this is a small part of a bigger picture, some implications for the general status of consciousness science can then be outlined.
In sensory experience we are presented with certain phenomenal qualities such that there is something that it’s like to undergo that experience. Representationalism about phenomenal qualities is the claim that for a sensory experience to have a particular phenomenal quality is a matter of it having a particular representational content. This view is not only widespread in recent philosophy of mind, but moreover, characterizing sensory phenomenology in terms of representational content is a ubiquitous practice in psychology and neuroscience. In this paper, I propose an original representationalist approach to phenomenal qualities that stands in direct contrast to the version of the theory that is most dominant in the philosophical literature: externalist representationalism. However, it should be noted that (solely due to considerations of space) I do not attempt to provide any direct arguments against externalist representationalism in this paper. Rather, my primary aim here is to describe the outlines of an internalist version of representationalism; specifically, one that is congruent with and informed by the general empirical framework developed and employed by psychology and neuroscience.
Peter Carruthers (2009, 2010, 2011) has made a powerful empirically supported case for the view that we know our own propositional attitudes (PA) only via self-interpretation. Carruthers (2011, 2012) claims that the conjunction of this view of self-knowledge and the global workspace theory of consciousness implies that there are no conscious propositional attitudes. I shall argue that this conjunction of theories doesn’t preclude the existence of conscious PAs. More generally, on the global workspace account, there could be conscious states, namely PAs, that are known to the subject only via self-interpretation.
During moments of life-threatening danger, people often experience time as passing more slowly. Psychological data suggest that these experiences lie on a continuum: for instance, subjects exposed to mildly frightening stimuli over-estimate the durations of those stimuli relative to controls. This paper focuses on two conceptual puzzles which such experiences raise. The first puzzle is how (if at all) we can reconcile such experiences with what I have elsewhere argued is our naïve conception of temporal experience, according to which experience unfolds over a period of objective time in a way that precisely matches the apparent duration of the period it presents. The second puzzle is how to understand the connection between subjective temporal expansion and our evolved response to danger. For whilst many theorists (and subjects) claim that such experiences aid survival, it is obscure why an illusion of temporal expansion would have any survival benefit. By identifying the key assumptions behind these puzzles, I develop an account of duration perception which resolves both puzzles. The first puzzle is resolved by denying that we perceive duration relative to an objective, subject-independent measure. The second puzzle is resolved by claiming that our subjective measure of time is in itself relevant to our survival. Drawing on discussion of Locke’s views on duration perception I offer one natural candidate proposal: we perceive duration relative to our non-perceptual conscious mental activity.
The hypothesis of extended cognition (HEC) derives from a 1998 paper entitled ‘The Extended Mind’ by Andy Clark & David Chalmers. Within this paper, Clark & Chalmers argue that cognitive processes can, and in some cases do, extend outside the body, encompassing features of the environment. Their argument is elaborated through two key thought experiments, and the Parity Principle. Their hypothesis is, strictly speaking, not reliant on any particular theory of mind, but it is often taken together with, and further justified by, functionalism. I intend to argue that there are strong independent arguments for adopting both functionalism and HEC, and that there are further arguments for adopting their conjunction. I will defend this extended functionalism against its critics, particularly Mark Sprevak, principally by limiting HEC only to cases where the mind extends over what I will call, following Clark, a transparent tool.
One way to characterize the specific relation that we have only with our own body is to say that only our own body appears to us from the inside. Although widely accepted, the nature of this specific experiential mode of presentation of the body is rarely spelled out. Most definitions amount to little more than lists of the various body senses (including senses of posture, movement, heat, pressure, and balance). It is true that body senses give a privileged informational access to our own body that we have for no other bodies, by contrast to external senses like vision, which can take many bodies as their object. But a theory of bodily awareness needs to take into account recent empirical evidence indicating that bodily awareness is pervaded by multisensory effects, regardless of any dichotomy between body senses and external senses. Here I will argue in favour of a certain kind of constitutive multimodality thesis of bodily experiences: without multimodality, we would experience our body differently. I will show that the body senses fail to fully account for the content of bodily experiences. I will then propose that vision helps to compensate for the insufficiencies of the body senses in people who can see. I will finally argue that the multimodality of bodily experiences does not come at the cost of their privileged access to one’s body.
Shared agency is paradigmatically involved when two or more people paint a house together, tidy the toys away together, or lift a two-handled basket together. To characterise shared agency, some philosophers have appealed to a special kind of intention or structure of intention, knowledge or commitment often called ‘shared intention’. The idea is that shared agency stands to shared intention much as ordinary, individual agency stands to ordinary individual intention. In this talk I shall use this parallel between individual and shared intention to argue that there are forms of shared agency characterising which requires appeal to motor representation. Shared agency is not only a matter of what we intend: sometimes it constitutively involves interlocking structures of motor representation. This has consequences for understanding the roles of shared agency in evolution and development.
I argue that there is more in the temporal world than events: that it is important to recognize a category of process. This makes a general claim in metaphysics. I suggest that it matters in philosophy of mind.
I’ll explore the idea that some visible properties figure essentially in the qualitative character of visual experience. I’ll argue that this idea is coherent and well motivated, provided that we reject two traditional assumptions: (1) that the properties in question are maximally determinate, rather than determinable; (2) that introspection enables us to form true beliefs about the exact character of our visual experiences.
Photograph taken by Mike Peel
In this talk, I submit that it is the controlled part of skilled action, that is, the part that accounts for the exact, nuanced ways in which a skilled performer modifies, adjusts, and guides her performance, for which we must account if we are to have an adequate philosophical account of skill. My claim is that control is at the heart of skilled action because the particular way in which a skill is instantiated is precisely what defines how skillful that action is. That is, the level of skill that one possesses is in direct proportion to the amount of control that one exerts over the performance of one’s own actions. Control is what constitutes the difference between a gold medal performance and a bronze medal one, and between the elite athlete and the novice. It is control that is learned through practice and control that allows us to gasp at the beauty, elegance, and perfection of a skilled performance. One may be unsurprised to learn that when it comes to a philosophical account of skill, both Intellectualists of the Stanley variety and Anti-intellectualists of the Dreyfus sort forgo a satisfactory account of control. One may be surprised, however, to learn that both Stanley and Dreyfus forgo such an account for precisely the same reason: each reduces control to a brute, passive, unintelligent, automatic process, which then prevents them from producing a substantive account of how such processes are flexible, manipulable, subject to learning and improvement, responsive to intentional contents at the personal level, and holistically integrated with both cognitive and motor states. Stanley and Dreyfus make the same mistake for very different reasons, but in making it, they both lose control. In this talk, I will review the reasons for their mistakes and identify the kinds of control that both leave out.