It's just hit me.
A corporeal schema is not a structure; it's a background - as MMP says, it is the flip-side of the projective activity that leaves objects round about it as traces of its acts and at the same time uses them as springboards to future acts.
Existence is not any particular structure; it is a movement from contingency to necessity (sedimentation) and from necessity to contingency (innovation).
Our discussion of boundedness is meant to reinstall the subpersonal in a situation, but this situation is never entirely biological.
Merleau-Ponty was concerned to oppose a Kantian transcendental ego that constitutes its world in its entirety. But he would also be opposed to a pan-biologism that sees only a biological situation, and which recognizes no dialectical relation between this biological situation and a personal one.
But how are we to think of the influence of the personal on the biological?
The idea of the discussion of boundedness was not to trace prepersonal commitment to biological structure, but to interpret those structures in terms of that commitment, i.e. in terms of existence.
We do not exist in a purely biological situation; in fact, it takes incredible suffering to remain in such a situation for any extended period of time. The objection to Kantianism is that the organism is never entirely incorporated into our personal life. But the reverse is also true. Our personal life is rarely lived in that prepersonal time of pure sensibility.
Friday, July 30, 2004
Tuesday, June 15, 2004
What's wrong with investing the sub-personal with intentionality?
Gallagher, in his survey of ways of relating cognitive science to phenomenology, suggests that we need to be vigilant against investing sub-personal processes with intentional contents.
Now, to be sure, there is definitely something wrong with arbitrarily or unreflectively investing brain processes with, for example, beliefs and desires. Dennett has done a wonderful job of articulating this in his discussion of "What the frog's eye tells the frog's brain".
However, what empirical grounds are there for denying that these processes (just calling them processes rather than behaviours seems to prejudge the question) are directed toward anything at all? It seems to me that ruling out any and all talk of intentionality at this level is just as metaphysical (in the bad sense) as ruling it in.
It rests on the presupposition that the personal and the subpersonal are discrete levels of description. There are, however, good reasons for undermining their discreteness, not least because, though treating them as discrete allows us to describe pathological behaviours, it makes normal behaviour seem mysterious.
Ron McClamrock's argument for a task-centred version of the information processing hypothesis, which he claims will make it compatible with externalism, will provide an opportunity for us to propose a conception of the subpersonal which exhibits a kind of intentionality that is scientifically bearable. Our conception involves expanding on McClamrock's suggestion that a task-based account of cognition might even be extended to the design level.
Our conception attempts to take account of the way in which scientific taxonomies are developed, or rather, the responsibility to the organism that prompts scientific taxonomies to change. Recognizing this responsibility does not allow us to short-circuit the development of these taxonomies, but it does capture the sense in which these taxonomies are always a scientific attempt to do justice to the otherness of their objects. In this case, the move to enactivism and DST is an attempt to bear witness to the self-conceptuality of the organism (Morris), i.e. to the fact that the dimensions of its description are, and should be, relative to its strategies of inherence.
So we revise the relation of embeddedness, from something akin to the relations of immersion or component-to-system, to something more like an inalienable strategic relation. In doing so, we are able to inject a little intentionality - an intentionality that needn't be equated with the intentionality of projects - into our third person descriptions.
Friday, June 04, 2004
Extended Minds or Mindtools
Inspired by reading Jonassen on Mindtools (esp. final sections)
http://www.coe.missouri.edu/~jonassen/Mindtools.pdf
The idea behind the extended mind is that we offload cognitive processing onto the environment.
The fact that we can do this suggests that cognitive processing is not inherently internal.
The tendency, then, is to demolish the biological notion of cognitive identity, which treats the brain/skin barrier as its grounds.
From there, it is tempting to abandon cognitive identity altogether (meme theory).
But, since we attribute intelligence to cognitive identities, this move involves abandoning the attribution of intelligence.
One way of resisting this tendency is to distinguish between intelligence and its tools.
However, a hard distinction between intelligence and its tools, which are ultimately its means of expression, is problematic, not least because we do not know what intelligence could be in the absence of its expression.
We can't treat intelligence as that which is common to intelligent expression without blurring the boundaries between distinct knowledge domains; boundaries which reflect differences in means as much as differences in form.
The distinction between intelligence and its tools is about preserving our ability to ascribe intelligence to someone - which we need for coordination of projects and scorekeeping practices (such as those Brandom describes) - and the question is: what sort of cognitive identity do we actually have?
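To make the question concrete, here is a toy sketch of the offloading idea - entirely my own illustration; the names (Agent, Notebook, recall) are invented, though the notebook image will be familiar from the extended mind literature. The point is that the same recall behaviour can be realized internally, externally, or by the coupled system, and which of these we count as the "cognitive identity" is precisely what is at issue.

```python
# A toy sketch of "offloading" cognition onto the environment.
# All names here are illustrative, not a real API or anyone's actual model.

class Notebook:
    """Stands in for the environment: an external store the agent relies on."""
    def __init__(self):
        self._pages = {}

    def write(self, key, value):
        self._pages[key] = value

    def read(self, key):
        return self._pages.get(key)


class Agent:
    """Stands in for the biological individual."""
    def __init__(self, notebook=None):
        self._internal_memory = {}
        self._notebook = notebook  # coupling to the environment

    def remember(self, key, value, offload=False):
        # "Offloading": commit the item to the environment rather than the head.
        if offload and self._notebook is not None:
            self._notebook.write(key, value)
        else:
            self._internal_memory[key] = value

    def recall(self, key):
        # Recall is indifferent to where the item is stored...
        if key in self._internal_memory:
            return self._internal_memory[key]
        if self._notebook is not None:
            return self._notebook.read(key)
        return None


# ...but the attribution is not: does "remembering" belong to the Agent,
# to the Agent-plus-Notebook system, or to neither taken alone?
otto = Agent(notebook=Notebook())
otto.remember("MoMA", "53rd Street", offload=True)
print(otto.recall("MoMA"))  # 53rd Street
```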
"Our goal as technology-using educators, should be to allocate to the learners the cognitive responsibility for the processing they do best while requiring the technology to do the processing that it does best. Rather than using the limited capabilities of the computer to present information and judge learner input (neither of which computers do well) while asking learners to memorize information and later recall it (which computers do with far greater speed and accuracy than humans), we should assign cognitive responsibility to the part of the learning system that does it the best. Learners should be responsible for recognizing and
judging patterns of information and then organizing it, while the computer system should perform calculations, store, and retrieve information." (15)
Notice the different verbs used here: learners are responsible for their tasks, computer systems perform theirs. This distinction is amplified in the following quote:
"Derry and LaJoie (1993) argue that "the appropriate role for a computer system is not that of a teacher/expert, but rather, that of a mind-extension "cognitive tool" (p. 5). Mindtools are unintelligent tools, relying on the learner to provide the intelligence, not the computer. This means that planning, decision-making, and self-regulation of learning are the responsibility of the learner, not the computer. However, computer systems can serve as powerful catalysts for facilitating these skills assuming they are used in ways that promote reflection, discussion, and problem solving." (14, orig. emphasis)
It is almost ironic that the very offloading of cognitive processes that inspires the extended mind hypothesis and its concomitant notion of distributed intelligence should motivate educationalists to emphasize the importance of unintelligent learning tools. What this suggests is that while intelligence can be considered as a property of the mind-world system, improving the sophistication of this system does not a fortiori imply improving the intelligence of its participants. So, it is just as inappropriate to collapse the distinction between participant and system, between intelligence and its means of expression, as it is to uncritically assert their independence. Simply put, the cognitive identity we ascribe intelligence to - participant or system - makes a difference. If it didn't, then smarter computers would make for smarter students, which is exactly what the educationalists reject.
This has some surprising consequences for embodied, embedded cognitive science. It suggests that intelligence is attributable to neither the body nor the total environment. We cannot attribute intelligence to the environment because intelligence does not devolve from system to individual. It is as little a characteristic of the environmental totality as it is a feature of some Kantian transcendental unity of apperception. Nor can we attribute intelligence to the biological body, because the body is embedded: its processes are meaningless in the absence of a background, and similarly meaningless in abstraction from the way they couple to this background.
Tuesday, June 01, 2004
So, if I were... here's what I'd say. Pt 2
Let's try to get our bearings. Cognitive science, roughly speaking, is the attempt to understand human consciousness as information processing. Cognitive science takes a generally functionalist approach to the mind, which means that it characterises mental states in terms of their functional roles, independently of the physiology of the brain. This way, while it remains broadly materialist, it hopes to avoid making any substantive metaphysical claims.
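A minimal sketch, assuming nothing beyond the functionalist point just stated (the classes and the "pain role" are my own toy illustration): what makes a state the state it is, on this view, is the role it plays among inputs, outputs, and other states, not the material that realizes it.

```python
# Toy illustration of functionalism / multiple realizability.

from abc import ABC, abstractmethod

class PainRole(ABC):
    """A state counts as 'pain' if it plays this role, whatever realizes it."""
    @abstractmethod
    def register_damage(self, severity: float) -> None: ...
    @abstractmethod
    def output_behaviour(self) -> str: ...


class CarbonRealizer(PainRole):
    """One realization: something brain-like."""
    def __init__(self):
        self._nociceptive_activity = 0.0
    def register_damage(self, severity: float) -> None:
        self._nociceptive_activity += severity
    def output_behaviour(self) -> str:
        return "withdraw" if self._nociceptive_activity > 0.5 else "carry on"


class SiliconRealizer(PainRole):
    """Another realization: a different substrate, same functional role."""
    def __init__(self):
        self._damage_register = []
    def register_damage(self, severity: float) -> None:
        self._damage_register.append(severity)
    def output_behaviour(self) -> str:
        return "withdraw" if sum(self._damage_register) > 0.5 else "carry on"


# Functionally, both are in the same state under the same conditions:
for subject in (CarbonRealizer(), SiliconRealizer()):
    subject.register_damage(0.7)
    print(type(subject).__name__, subject.output_behaviour())  # both: withdraw
```

Both realizers count as being in the same state because they satisfy the same role, which is how functionalism stays broadly materialist without committing itself to any particular physiology.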
To understand its embodied, embedded strand (also known as enactivism), it is important to understand the changes cognitive science has undergone in recent years. As little as ten years ago, cognitive science could be characterised by its adherence to the symbolic hypothesis: the claim that cognition amounts to symbol-processing, the ratiocination of discrete pieces of information according to inferential rules. On this view, the senses and actions are accordingly understood as the transduction of physical stimulation into symbols at the input end, and of symbols into bodily movement at the output end.
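A toy sketch of the symbolic hypothesis itself may help (the facts and rules below are invented for illustration; this is a caricature of a production system, not anyone's actual model). It shows what "ratiocination of discrete pieces of information according to inferential rules" looks like in the simplest possible case.

```python
# Cognition modelled as manipulation of discrete symbols by explicit rules.

# Facts: discrete symbolic representations.
facts = {("sees", "apple"), ("hungry",)}

# Rules: if all premises are among the facts, add the conclusion.
rules = [
    ({("sees", "apple"), ("hungry",)}, ("wants", "apple")),
    ({("wants", "apple")}, ("reach-for", "apple")),
]

# Forward chaining: apply rules until no new symbols can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)
# On the symbolic view, the action ("reach-for", "apple") is just the last
# step in this chain of rule-governed symbol manipulation.
```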