Mind: The Argument from Evolutionary Biology

(A Working Model)

Jerome Iglowitz

(This paper is a refinement and enlargement of the first chapter of my original book, “Virtual Reality: Consciousness Really Explained!” (Original and First Edition), available on the website cited below.)

[Note: I have recently posted a new book with a free download: “Virtual Reality: Consciousness Really Explained” (Third Edition). That book frames my argument in terms of new developments in the foundations of modern mathematics and should help with your comprehension of this one.]

 

Short Abstract

 

In this paper I will propose the conceptually simplest, (though technically most difficult), part of a three-part hypothesis which, I contend, is the first viable solution to the problem of consciousness. (See “Virtual Reality: Consciousness Really Explained” (Third Edition) for my whole answer.)  It is a hard answer, but I think it actually works.  It starts out in a highly abstract manner, but it ends in a very specific conclusion with a pointed example drawn from contemporary biology, (see "Appendix: Freeman and Automorphism").

 

This leg of my composite hypothesis proposes that the simplest evolutionary rationale for the brains of complex organisms was neither representation nor reactive parallelism, as is generally presupposed, but was specifically an operational and internal (self-)organization of primitive, (but blind!), biologic process instead. I will propose that our cognitive objects themselves are deep operational metaphors only! They are operational metaphors of primitive biological response and they are not informational referents to environment. I propose that they are the specific organizational tools of the megacellular colossus.

 

I urge that a pointedly operational organization was an evolutionary necessity to enable an adroit functioning of profoundly complex metacellular organisms in a hostile and overpoweringly complex environment. I argue that this organization was antithetical to a representative role, however, because the latter ignores the crucial factor of urgency -i.e. danger / risk, (the large database problem)! I have argued elsewhere, ("Consciousness: a Simpler Approach to the Mind-Body Problem"), that this hypothesis, (in concert with ancillary logical and epistemological hypotheses), opens the very first real possibility for an actual and adequate solution of the problem of “consciousness” by allowing an operational use of the mathematical concept of “implicit definition”, which thereby supplies a theory of meaning and the ultimate rationale for “consciousness” itself.

 

Ultimately, however, just as physics was forced to epistemology as a necessary incorporation into its essential science in its advances into Relativity and Quantum Theory, so too is biology forced to epistemology to enable the science of mind.  Biological organisms cannot know the world around them, but they can and must operate in it.  The question devolves to how well they are capable of doing so and whether there is just one unique way.  The particular epistemology necessitated by this problem, and which finally elucidates the problem of mind, is already extant however: it is embodied in Ernst Cassirer’s “Philosophy of Symbolic Forms”. (See VR: Chapter 4)  I call it “Ontic Indeterminism”, which I think characterizes it better.  Mine is a strange idea admittedly, and somewhat complex, but I think it is true.  It is explanatory for all the aspects of mind.

 

Long Abstract

 

This paper argues against mind as a representative device. But, unlike most such arguments, it sets forth a specific counterproposal, one that is not eliminative for “mind” as we normally mean the word. I argue that the objects of mind, (i.e. percepts, concepts), are, in fact, operational metaphors. I argue they are biological artifacts organizing and optimizing primitive metacellular response.

 

I begin with a series of examples challenging our normal expectations of the potentialities of models per se and introduce a new and specific kind of model –I call it a “schematic model”. This is not the abstractive model which usually goes by that name. It is, rather, a model whose very objects, (icons), are specifically and functionally molded to explicitly serve the purpose for which the model was designed. That purpose, I propose, was organizational efficiency. I go from very simplistic illustrations: training seminar models in a business setting -to classroom models in a university -to the models of control system engineers as actualized in the instruments and controls they fabricate. Finally I examine GUI’s, (graphic user interfaces), of computers, models constructed by software engineers. All these demonstrate a neglected potential of models for optimizing function over representation. They illustrate a schematism which is not abstractive. This series of examples is not intended as a linear series however, but as a logarithmic one. Please take heed.

 

Ultimately, it is the case of the GUI that I argue is the case of the mind. The particular GUI that I suggest is schematic however, (in the sense above), not representational or hierarchical. It is a specifically functional and non-hierarchical model whose ultimate purpose was organization. It was optimized for performance however, not information. It is like Edelman’s “topobiological maps” but seen through the filter of his larger non-topological “global mapping” to serve process rather than information. Alternatively, it might be seen from the perspective of Walter Freeman’s chaotic interface –as the rationale of his intentional “frames”. Freeman, in fact, supplies an almost exact illustration of the case I will make and I explore it in depth.

Next I make the formal and abstract logical argument from the perspective of functional efficiency. I argue that it was a schematic and virtual model, and not a representative one that was necessary for the optimization of performance in highly complex and specifically dangerous environments. But this is exactly the case for the evolutionary biology of complex metacellular organisms. Our megacellular world is overwhelmingly complex and specifically dangerous.

 

Finally, I present a unifying argument joining the conclusions of the present paper with those of my paper: “Consciousness: a Simpler Approach to the Mind-Brain Problem”, (Iglowitz, 2001). Each approached the mind-brain problem from a different perspective and reached a radical, though plausible conclusion. The conclusions are different. Here I argue that the two conclusions are, in fact, compatible and synergistic. I argue that the implicitly defined, virtual and logical objects argued in the prior paper are the same as the organizational artifacts of the present paper. The rationale is simple: for modern science, our very logic itself -and all it contains- must necessarily be reduced to biology. (The alternative is mysticism!) The implicitly defined logical objects which reify “mind” are thus ultimately biological objects. They are organizational artifacts of the brain.

 

Ultimately, however, it is the epistemological implications of each of these themes that ties the whole of the problem together.  Man, as a biological organism, cannot know the world in which he exists.  What man can do is provide productive hypotheses with which to act in it.  Organisms act, they do not know!  The mistake lies in the assumption that there can be only one comprehensive theory which exhausts the world.  Cassirer argued otherwise in his “Philosophy of Symbolic Forms”.  That thesis leads to a Kantian conclusion of ontic indeterminacy -i.e. we as biological organisms cannot know what the world really is.

 

This is a terrible conclusion, but it leads to an actual answer to the problem we originally posed.  That answer is grounded in the very fundamentals of scientific belief, not of knowledge.  It is what scientific realism necessarily starts with.  It incorporates what Putnam, Lakoff and Edelman call the essential postulates of realist reason.  But these are necessarily postulates only.  They are:  (1) the belief in an external reality beside and including ourselves, (2) the belief in the reality of experience, and (3) (I propose) that there must be some connection between the two.  It is the substance of the latter that I propose is the substance of mind.  This is the ground which is developed in my book: “Virtual Reality: Consciousness Really Explained” (Third Edition), (see especially chapters 6 through 10).

 


CONTENTS, (Hyperlinks)

1. REPRESENTATION: THE PERSPECTIVE FROM BIOLOGY

2. THE SCHEMATIC MODEL: DEFINITION AND EXAMPLES. (DEFINING WHAT IT MEANS TO BE “AN OBJECT”)

2.1 THE SIMPLEST CASE: A DEFINITION BY EXAMPLE

2.1.1   REVERSING OUR PERSPECTIVE:

2.2 A CASE FOR SCHEMATISM MORE SPECIFIC TO OUR SPECIAL PROBLEM: (NARROWING THE FOCUS)

        (THE ENGINEERING ARGUMENT)

2.3 THE “GUI”: THE MOST PERTINENT AND SOPHISTICATED EXAMPLE OF A SCHEMATIC MODEL

        (THE SPECIAL CASE)

2.4 TOWARDS A BETTER BIOLOGICAL MODEL

2.4.1 BIOLOGY, THE REAL THING: FREEMAN’S MODEL

2.4.2 AN EXPLICIT MODEL OF THE MIND:

3. THE FORMAL AND ABSTRACT PROBLEM:

3.1 THE FORMAL ARGUMENT

3.2 THE SPECIFIC CASE OF BIOLOGY

3.3 RETRODICTIVE CONFIRMATION

3.4 CONCLUSION, (SECTION 3)

4. THE CONCORDANCE: BIOLOGY’S PROPER CONCLUSION

5. PLAIN TALK:

6. APPENDIX, (FREEMAN AND AUTOMORPHISM)

7. CONCLUSIONS

 


1. REPRESENTATION: THE PERSPECTIVE FROM BIOLOGY

Sometimes we tentatively adopt a seemingly absurd or even outrageous hypothesis in the attempt to solve an impossible problem -and see where it leads. Sometimes we discover that its consequences are not so outrageous after all. I agree with Chalmers that the problem of consciousness is, in fact, “the hard problem”. I think it is considerably harder than anyone else seems to think it is however. I think its solution requires new heuristic principles as deep and as profound as, (though different from), the “uncertainty”, “complementarity” and (physical) “relativity” that were necessary for the successful advance of physics in the early part of the 20th century. I think it involves an extension of logic as well. Consideration of those deep cognitive principles: “cognitive closure”, (Kant and Maturana), “epistemological relativity”, (Cassirer and Quine), and of the extension of logic, (Cassirer, Lakoff, Iglowitz), must await another discussion however. {3}

Sometimes it is necessary to walk around a mountain in order to climb the hill beyond. It is the mountain of “representation”, and the cliff, (notion), of “presentation” embedded on its very face, which blocks the way to a solution of the problem of consciousness. This hypothesis points out the path around the mountain.

Maturana and Varela’s “Tree of Knowledge”, {4} is a compelling argument based in the mechanics of physical science and biology against the very possibility of a biological organism’s possession of a representative model of its environment. They and other respected biologists, (Freeman, Edelman), argue against even “information” itself. They maintain that information never passes between the environment and organisms; there is only the “triggering” of structurally determinate organic forms. I believe theirs is the inescapable conclusion of modern science.

I will now present a specific and constructive counterproposal for another kind of model however: i.e. the “Schematic Operative Model”. Contrary to the case of the representative model, it does remain viable within the critical context of modern science. I believe that we, as human organisms, do in fact embody a model. I believe it is the stuff of mind!

2. THE SCHEMATIC MODEL: DEFINITION AND EXAMPLES. (DEFINING WHAT IT MEANS TO BE “AN OBJECT”)

Normally, when we think of “models”, we mean reductive, or at least parallel, models. In the first case we think of a structure that contains just some of the properties of what is to be mirrored. When we normally use the term “schematic model”, we talk about the preservation of the “schema”, or “sense”, of what is mirrored. Again it is reductive, however -it is logically reductive. It is, as has been claimed, “just a level of abstraction”. There are other uses for models, however -those that involve superior organizations! This is the new sense of “schematic model” that I propose to identify.

2.1 THE SIMPLEST CASE: A DEFINITION BY EXAMPLE

Even our most simplistic models, the models of our mundane training seminars, suggest the possibility of another usage for models, very different from their use as representative schemas. They demonstrate the possibility of a wholly different paradigm whose primary function is organization instead.

Look first at the very simplest of models. Consider the models of simplistic training seminars -seminars in a sales organization for instance. “’Motivation’ plus ‘technique’ yields ‘sales’.”, we might hear at a sales meeting. Or, (escalating just a bit), “’Self-awareness of the masses’ informed by ‘Marxist-dialectic’ produces ‘revolution’!”, we might hear from our local revolutionary at a Saturday night cell meeting. Visual aids, (models), and diagrams are ubiquitous in these presentations. A lecturer stands at his chalkboard and asks us to accept drawings of triangles, squares, cookies, horseshoes... as meaningful objects -with a “calculus” of relations, (viz: an “arithmetic” of signs), {5} between them, (arrows, squiggles, et al). The icons, (objects), of those graphics are stand-ins for concepts or processes as diverse, (escalating just a bit more), as “motivation”, “the nuclear threat”, “sexuality”, “productivity”, and “evolution”. Those icons need not stand in place of entities in objective reality, however. What is “a productivity” or “a sexuality”, for instance? What things are these?

Consider this: two different lecturers might invoke different symbols, and a different “calculus”, to explicate the same topic. In analyzing the French Revolution in a history classroom, let us say {6}, a fascist, a royalist, or a democrat might alternatively invoke “the Nietzschean superman”, “the divine right of kings”, “freedom”, ... as actual “objects” on his blackboard, (with appropriate symbols). He will redistribute certain of the explanatory aspects, (and properties), of a Marxist’s entities, (figures) -or reject them as entities altogether. {7} That which is unmistakably explanatory, (“wealth”, let us say), in the Marxist’s entities, (and so which must be accounted for by all of them), might be embodied instead solely within the fascist’s “calculus” or in an interaction between his “objects” and his “calculus”. Thus and conversely the Marxist would, (and ordinarily does), reinterpret the royalist’s “God”-figure, (and the function of that “God” in social interaction, which he -the Marxist- admits {8}), as “a self-serving invention of the ruling class”. It becomes an expression solely of his “calculus” and is not embodied as a distinct symbol, (i.e. object). Their “objects” -as objects- need not be compatible! As Edelman noted: “certain symbols do not match categories in the world. ... Individuals understand events and categories in more than one way and sometimes the ways are inconsistent.” {9}

Figure 1, (Madeline’s Chalkboard)

Figure 2, (Marx’s Chalkboard)

What is important is that a viable calculus-plus-objects, (a given model), must explain or predict “history” -that is, it must be compatible with the phenomena, (in this particular example the historical phenomena). But the argument applies to a much broader scope. I have argued elsewhere, {10} (following the strong case of Hertz and Cassirer), that the same accounting may be given of competing scientific theories, philosophies, and, indeed, of any alternatively viable explanations.

Consider Heinrich Hertz: “The [scientific] images of which we are speaking are our ideas of things; they have with things the one essential agreement which lies in the fulfillment of the stated requirement, [of successful consequences], but further agreement with things is not necessary to their purpose. Actually we do not know and have no means of finding out whether our ideas of things accord with them in any other respect than in this one fundamental relation.” (Hertz, “Die Prinzipien der Mechanik”)

The existence of a multiplicity of alternately viable calculuses, (sic), and the allowable incommensurability of their objects {11} suggests an interpretation of those objects contrary to representation or denotation, however. It suggests the converse possibility that the function and the motivation of those objects, specifically as entities, (in what I will call “schematic models”), is instead to illustrate, to enable -to crystallize and simplify- the very calculus of relation proposed between them! {12} These "objects", I propose, are manifestations of the structure; the structure is not a resolution of the objects.
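A toy sketch in Python may make this concrete. Everything in it is invented for illustration: two “models” with incommensurable objects and different “calculi” which nonetheless agree on the phenomena that each must predict.

```python
# A toy sketch, (entirely invented), of two viable "models" of the same
# phenomena. Their "objects" are incommensurable, their "calculi" differ,
# yet both satisfy the one requirement that matters: successful prediction.

# Model A: its objects are "wealth gap" and "class tension"; its calculus
# conjoins them.
def model_a(year: int) -> bool:
    wealth_gap = year >= 1780
    class_tension = year >= 1788
    return wealth_gap and class_tension      # predicts "revolution"

# Model B: no "class tension" object at all; that explanatory work is
# absorbed into a different calculus over a different object,
# ("legitimacy of the crown").
def model_b(year: int) -> bool:
    crown_legitimacy = max(0.0, 1.0 - (year - 1770) / 19.0)
    return crown_legitimacy <= 0.0           # predicts "revolution"

# Both are "viable": they agree on the phenomena, though their objects,
# as objects, need not be compatible.
for year in (1785, 1789):
    assert model_a(year) == model_b(year)
print("the two models agree on the phenomena")
```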

2.1.1 REVERSING OUR PERSPECTIVE:

I propose that the boundaries -the demarcations and definitions of these schematic objects, (their “contiguity” if you will)- are formed specifically to meet the needs of the operations. I propose that they exist to serve structure -not the converse. {13} Their objects -specifically as objects- serve to organize process, (i.e. analysis or response). They are not representations of actual objects or actual entities in reality. {14} This, I propose, is why they are “things”. They functionally bridge reality in a way that physical objects do not and I suggest that they are, in fact, metaphors of analysis or response. The rationale for using them, (as any good “seminarian” would tell you), is clarity, organization and efficiency.

Though set in a plebeian context, the “training seminar”, (as presented), illustrates and defines the most general and abstract case of schematic non-representative models in that it presumes no particular agenda. It is easily generalized: it might as well be a classroom in nuclear physics or mathematics, the boardroom of a multinational corporation, -or a student organizing his love life on a scratchpad.

Figure 3, Figure 4

2.2 A CASE FOR SCHEMATISM MORE SPECIFIC TO OUR SPECIAL PROBLEM: (NARROWING THE FOCUS)

(THE ENGINEERING ARGUMENT)

Engineers’ instrumentation and control systems provide an example of the organizational, non-representational use of models and “entities” in another setting. These entities, and the context in which they exist, provide another kind of “chalkboard”. {15}  Their objects need not mirror objective reality either. A gauge, a readout display, a control device, (the “objects” designed for such systems), need not mimic a single parameter -or an actual physical entity. Indeed, in the monitoring of a complex or dangerous process, it should not. Rather, the readout for instance should represent an efficacious synthesis of just those aspects of the process which are relevant to effective response, -and be crystallized around those relevant responses! A warning light or a status indicator, for instance, need not refer to just one parameter. It may refer to electrical overload and/or excessive pressure and/or... Or it may refer to an optimal relationship, (perhaps a complexly functional relationship), between many parameters -to a relationship between temperature, volume, mass, etc. in a chemical process, for instance.

An exactly parallel case holds for control devices. A single control may orchestrate a multiplicity of (possibly disjoint) objective responses. The accelerator pedal in a modern automobile, as a simple example, may integrate fuel injection volumes, spark timing, transmission gearing...

Ideally, (given urgent constraints), instrumentation and control might unify in the selfsame “object”. We could then manipulate the very object of the display and it in itself could be the control device as well. Consider the advantages of manipulating a graphic or tactile object which is simultaneously both a readout and a control mechanism under urgent or dangerous circumstances. Now think about this same possibility in relation to our ordinary objects of perception -in relation to the sensory-motor coordination of the brain and the objects of naive realism in the real world! The brain is a control system, after all. It is an organ of control and its mechanics must be considered in that perspective. Its function is exceedingly complex and the continuation of life itself is at stake. It is a complex and dangerous world. Might not our naïve world itself be such a combined schematic control system? {16}
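A minimal sketch in Python may make the proposal concrete. It assumes an imaginary plant interface, (every name below is hypothetical), and shows a single “object” whose displayed status synthesizes several parameters and whose single manipulation fans out to several disjoint responses.

```python
# A minimal sketch, (all names hypothetical), of a combined readout/control
# "object": its display is a response-relevant synthesis of several
# parameters, and one manipulation of the same object orchestrates several
# disjoint physical responses.

from dataclasses import dataclass

@dataclass
class ProcessState:
    pressure: float       # raw plant parameters, purely illustrative
    temperature: float
    current: float

class StubPlant:
    """Stand-in for the monitored and controlled process."""
    def read(self) -> ProcessState:
        return ProcessState(pressure=80.0, temperature=70.0, current=45.0)
    def vent_pressure(self) -> None: print("venting pressure")
    def reduce_power(self) -> None: print("reducing power")
    def log_event(self, msg: str) -> None: print("log:", msg)

class SchematicObject:
    """One icon that is simultaneously a readout and a control."""
    def __init__(self, plant: StubPlant):
        self.plant = plant

    def status(self) -> str:
        # The readout reports a *relationship* among parameters relevant
        # to response, not any single parameter.
        s = self.plant.read()
        overload = s.current > 40.0 or (s.pressure * s.temperature) > 5000.0
        return "DANGER" if overload else "OK"

    def press(self) -> None:
        # One gesture integrates a multiplicity of disjoint responses,
        # as with the accelerator-pedal example above.
        self.plant.vent_pressure()
        self.plant.reduce_power()
        self.plant.log_event("operator intervention")

panel = SchematicObject(StubPlant())
if panel.status() == "DANGER":
    panel.press()
```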

2.3 THE “GUI”: THE MOST PERTINENT AND SOPHISTICATED EXAMPLE OF A SCHEMATIC MODEL
(THE SPECIAL CASE)

The “object” in the graphic user interface, (GUI), of a computer is perhaps the best example of a purely schematic usage currently available. In my simplistic manipulation of the schematic objects of my computer’s GUI, I am, in fact, effecting and coordinating quite diverse, disparate and unbelievably complex operations at the physical level of the computer. These are operations impossible, (in a practical sense), to accomplish directly. What a computer object, (icon), represents and what its manipulation does, at the physical level, can be exceedingly complex and disjoint. The disparate voltages and physical locations, (or operations), represented by a single “object”, and the (possibly different) ones effected by manipulating it, correlate to a metaphysical object only in this “schematic” sense. Its efficacy lies precisely in the simplicity of the “calculus” it enables!  (It is the interface that must be simple!)
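As a small illustration of the point, (every operation name below is invented; no real operating system or toolkit API is intended), consider how a single icon action can stand in for many disjoint low-level operations while the user’s “calculus” remains one gesture:

```python
# A toy sketch of the GUI "object": one icon action stands in for many
# disparate low-level operations. All operation names are invented.

LOW_LEVEL = {
    "trash_icon.drop": [
        "mark inode unreachable",
        "update directory b-tree",
        "repaint screen region",
        "play confirmation sound",
    ],
}

def manipulate(icon_action: str) -> None:
    # The interface exposes one simple gesture; the substrate work it
    # effects is disjoint and respects no single hierarchy of "things".
    for operation in LOW_LEVEL[icon_action]:
        print("performing:", operation)

manipulate("trash_icon.drop")
```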

Contemporary usage is admittedly primitive. Software designers have limiting preconceptions of the “entities” to be manipulated, of a necessary preservation of hierarchy, and of the operations to be accomplished in the physical computer by their icons and interface. But I assert that GUI’s and their “objects”, (icons), have a deeper potentiality of “free formation”. They have the potential to link to any selection across a substrate, i.e. they could “cross party lines”. They could cross categories of “things in the world”, (Lakoff’s “objectivist categories” {17}), and thereby acquire the possibility of organizing on a different, and the most pressing, issue: i.e. urgency / risk. They need preserve neither parallelism nor hierarchy.

Biology supplies fortuitous examples of the sort of thing I am suggesting for GUI’s –e.g. in the brain’s “global mapping” noted by Edelman. {18} (I will present Walter Freeman’s more explicit case in detail shortly.) The non-topological connectivity Edelman notes from the brain’s “topobiological” maps, {19} and specifically the connectivity, (the “global mapping”), from the objects of those maps to the non-mapped areas of the brain, supplies a concrete illustration of the kind of potential I wish to urge for a GUI. Ultimately I will urge it as the rationale for the brain itself. This global mapping allows “... selectional events”, [and, I suggest, their “objects” as well], “occurring in its local maps ... to be connected to the animal’s motor behavior, to new sensory samplings of the world, and to further successive reentry events.” But this is explicitly a non-topological mapping. This particular mapping, (the global mapping), does not preserve contiguity. Nor need it preserve hierarchy.

Here is a biological model demonstrating the more abstract possibility of a connection of localized “objects” {20}, (in a GUI), to non-topological (distributed) process -to “non-objectivist categories”, using Lakoff’s terminology. As such, it illustrates “schematism” in its broadest sense. Edelman’s fundamental rationale is “Neural Darwinism”, the ex post facto adaptation of process, not “information”, and is thus consistent with such an interpretation. It does not require “information”. Nor does it require “representation”. Edelman, (unfortunately), correlates his topobiological maps, (as sensory maps), directly and representatively, (i.e. hierarchically), with “the world”. This is a clear inconsistency in his epistemology. It is in conflict with his early and continual repudiation of “the God’s eye view” on which he grounds his biologic epistemology.

 

Figure 5: A Graphic Rendering of Edelman’s Epistemology
(Note: hierarchy and contiguity are implicit!)

But what if we turn Edelman’s perspective around? What if we blink the “God’s eye” he has himself so strongly objected to, and step back from the prejudice of our human (animal) cognition? What if the maps and their objects both were taken as existing to serve blind primitive process instead of information? (Figure 6) What if they are organizational rather than representative?

Figure 6: A More Consistent Rendering of Edelman’s Epistemology Suggesting a New Paradigm for GUI’s
(Note: Neither hierarchy nor contiguity are implicit in this model.)

This is the case I wish to suggest as an illustration of the most abstract sense of the GUI, (and which I will argue shortly) –i.e. a non-topological correlation! It opens a further fascinating possibility moreover. It suggests that evolution’s “good trick”, (after P.S. Churchland’s usage), was not representation, but was, rather, the organization of primitive process in a topological context. It suggests that the “good trick” was evolution’s creation of the cortex in itself!

2.4 TOWARDS A BETTER BIOLOGICAL MODEL

Figure 7

2.4.1 BIOLOGY, THE REAL THING: FREEMAN’S MODEL

What is needed now is a more explicit model, and a specific research problem to embody the proposal. Edelman’s “global mapping” is all very well and good, but it doesn’t really do what it has to. It is “too philosophical” and, as Popper would predictably have urged, not falsifiable. A more detailed and quite specific model comes from the work of the noted neurophysiologist Walter J. Freeman. Based on extensive research first with the olfactory cortex, (arguably evolution’s first cortex), and then with the visual and other cortices, Freeman argues that the brain does not process information at all –it does other things! He has approached the problem directly and addressed the crux of the issue: what is the correlation between sensory input and resultant brain states? Is there one? This is explicitly empirical research clearly pertinent to the problems of parallelism and hierarchy and, if its conclusions are viable, it is totally relevant to my argument. It is falsifiable! But, conversely, it is capable of falsifying the very premise of the standard paradigm -i.e. that of “representation” itself.

First, however, please look at Freeman’s model, and note the striking similarity to my own Figure 6 just above. Strikingly similar, that is, if we interpret his “topographic projections” as following behind Edelman’s “topobiological maps”. (feature detectors?)

 

Figure 8, (Freeman’s Figure 2)

“Fig. 2. The input path from receptors to the bulb has some topographic specificity. The output path to the prepyriform has broad axonal divergence, which provides a basis for spatial integration of bulbar output and extraction of the “carrier” wave. (From Freeman 1983, reproduced by permission.)”

“It is based on a striking difference between two types of central path, one that provides topographic mapping from an array of transmitting neurons to an array of receiving neurons, the other having divergence of axons that provides for spatial integration of the transmitted activity.” (Freeman, 1994, my emphasis). Now compare Freeman’s Figure 2 with my Figure 6 shortly before it. This is an explicit case, truly drawn from biology illustrating the non-topological potential of virtual systems. It is not a topological mapping, does not preserve hierarchy, and does not preserve information. It is an actual case demonstrating the ultimate potential of schematic GUI’s for distributing, (or conversely, for centralizing), function into objects. Freeman’s model exposes a new paradigm for models. It demonstrates an organizational potential of models beyond representation.
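The contrast can be caricatured in a few lines of Python, (my own illustration, not Freeman’s code or data): a topographic projection preserves the spatial pattern of the stimulus, while a divergent projection spatially integrates it, so that point-to-point invariance with the stimulus is lost.

```python
# A toy contrast between Freeman's two path types, (illustration only).

import random
random.seed(0)

n = 10
stimulus = [random.random() for _ in range(n)]

# (1) Topographic path: receptor i drives target i; the spatial pattern
#     of the stimulus survives intact in the receiving array.
topographic = [stimulus[i] for i in range(n)]

# (2) Divergent path: each target pools a sparse random subset of the
#     whole input array, ("broad axonal divergence"), integrating its
#     activity; adjacency and the stimulus pattern do not survive.
divergent = [
    sum(stimulus[j] for j in random.sample(range(n), 4)) / 4.0
    for _ in range(n)
]

print("topographic:", [round(v, 2) for v in topographic])
print("divergent:  ", [round(v, 2) for v in divergent])
```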

Freeman begins:

“This book had its origin ... in an experimental finding.... I was tracing the path taken by neural activity that accompanied and followed a sensory stimulus in brains of rabbits. I traced it from the sensory receptors into the cerebral cortex and there found that the activity vanished, just like the rabbit down the rabbit hole in ‘Alice in Wonderland’. What appeared in place of the stimulus-evoked activity was a new pattern of cortical activity that was created by the rabbit brain... My students and I first noticed this anomaly in the olfactory system... and in looking elsewhere we found it in the visual, auditory, and somatic cortices too... In all the systems the traces of stimuli seemed to be replaced by constructions of neural activity, which lacked invariance with respect to the stimuli that triggered them. The conclusion seemed compelling. The only knowledge that the rabbit could have of the world outside itself was what it had made in its own brain.” (Freeman, 1995, my emphasis.)

What does this mean? What does it mean that the new pattern “lacked invariance” in regard to the stimuli? The “invariance” demanded correlates precisely to the “passage of information” -and it could not be found! “The visual, auditory, somatic and olfactory cortices generate... waves [that] reveal macroscopic activity ... from millions of neurons. ... These spatial AM patterns are unique to each subject, are not invariant with respect to stimuli, and cannot be derived from the stimuli by logical operations!” (Freeman, 1994)

In this paper, (“Chaotic Oscillations...”), Freeman actually makes two cases –one structural and one functional. The structural case is purely physiological and, I think, very strong. It deals with the actual connectivity of nerve tissue and argues against the possibility of maintaining topological integrity within the cortex. (The other case is for “Chaos theory” as an explanation of function, which I will return to later.) The former is the case I want to emphasize here, as I think it supplies an explicit illustration of my argument for the non-topological possibilities of schematic models. This is what I believe evolution did and how it did it.


He divides nerve physiology into two categories:

(1) Those which preserve topological integrity: this is the case for the sensory nerves for instance.

“Sensory neurons exist in large arrays in the skin, inner ear, retina...so that a stimulus is expressed as a spatial pattern...carried in parallel along sensory nerves. Typically only a small fraction of the axons in a nerve is activated...with the others remaining silent” [for isolation] “...so that the ‘signal’ of the stimulus is said to be ‘encoded’ in the frequencies of firing of that subset of axons subserving ...the activated...receptors.”

“The code of sensory, motor and autonomic parts of the peripheral nervous system is the spatial”, [topological], “pattern of temporal pulse rates. The same code appears to hold...for the ascending and descending pathways and relays in the brainstem and spinal cord. ...Serious efforts have been made to extend this model to the cerebral cortex with considerable success in characterizing the receptive fields and ‘feature detector’ properties of cortical neurons in primary sensory areas.” (Freeman, 1994) (But he argues that ‘feature detection’ occurs only early in cortical process.)

Points on the retina, for instance, are mapped onto the cortex in a way that preserves the topology of the source and, apparently, feeds the feature detectors that are just the very beginning of cortical input.


(2) Within the cortex, however, it is a different story. Cortical neurons typically have short dendritic trees on the order of ½ millimeter. They are not, however, typically connected to the neurons physically adjacent to them!

“The main neurons in cortex ...intertwine at unimaginable density, so that each neuron makes contact with 5,000 to 10,000 other neurons within its dendritic and axonal arbors, but those neighbors so contacted are less than one percent of the neurons lying within the radius of contact. The chance of any one pair of cortical neurons being in mutual contact is less than one in a million.” (Freeman, 1995)

“Peripheral neurons”, [on the other hand], “seldom interact with other neurons, but offer each a private path from the receptor to the central nervous system. In contrast, each cortical neuron is embedded in a milieu of millions of neurons, and it continually transmits to a subset of several thousand other neurons sparsely distributed among those millions and receives from several thousand others in a different subset.” (Freeman, 1994)


This is reminiscent of Maturana’s comment:

“It is enough to contemplate this structure of the nervous system... to be convinced that the effect of projecting an image on the retina is not like an incoming telephone line. Rather, it is like a voice (perturbation) added to many voices during a hectic family discussion (relations of activity among all incoming convergent connections) in which the consensus of actions reached will not depend on what any particular member of the family says.” Maturana, (1987), 163-4.

And Edelman’s:

“… To make matters even more complicated, neurons generally send branches of their axons out in diverging arbors that overlap with those of other neurons, and the same is true of processes called dendrites on recipient neurons …. To put it figuratively, if we ‘asked’ a neuron which input came from which other neuron contributing to the overlapping set of its dendritic connections, it could not ‘know’.” (Edelman, 1992, p.27)

Peripheral neurons are relatively isolated, (“private”), within nerve bundles and support a topological case to the point of ‘feature detection’ at cortex. Within the cortices, however, we are dealing with a different sort of connective process. We are no longer dealing with parallel or hierarchical, (i.e. information preserving), mappings. Because each cortical neuron is embedded in a milieu of millions of neurons, it “continually transmits and receives from several thousand others” and therefore has “continual background activity owing to its synaptic interactions with its neighbors”. This is a characteristic property of cortical neural populations not shared by peripheral neuron arrays.

Cortical process disperses function spatially through the brain, (“with strong axonal divergence”), through intertwined nerve process -not topologically. It connects point-to-point fitfully within the volumetric space of the brain, not topologically. These cell assemblages act as units which “provide for spatial integration [projection] of the transmitted activity.” The cortices generate dendritic potentials… arising from synaptic interactions of millions of neurons. They share “a spatially coherent oscillation… by which spatial patterns of amplitude modulation are transmitted in distinctive configurations… The neurons sharing the macroscopic, aperiodic oscillations comprise a local neighborhood that can be viewed as an equivalence class.” (Freeman, 1994, my emphasis) These “equivalence classes” thereby provide a non-contiguous spatial distribution onto the physical space of the brain. These spatially extensive and intertwined complexes of cells throughout the cortex achieve the connectivity that mere parallelism, (or hierarchy), cannot. Freeman shows us how a topological mathematical space can be mapped onto the specifically physical space of the brain. But that particular physical space is determined by its specific connectivity -by evolution and ontogeny, not representation. Determined by genetics and learning, (ontogeny), it has the ability to connect specific process “ad hoc”. It has the ability to self-organize on principles other than topological ones.

“The local neighborhoods corresponding to cortical columns and hypercolumns seldom have anatomical boundaries of their internal synaptic connections, so that an area of cortex composed of hundreds and even thousands of neighborhoods can act as a coherent element of function in generating a spatially coherent carrier wave. These distributed neural populations are dynamically unstable and are capable of very rapid global state transitions [which can] easily fulfill the most stringent timing requirements encountered in object recognition.” (ibid).
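A schematic sketch, (the construction is mine, not Freeman’s data), of what such an “equivalence class” amounts to: membership is assigned by shared activity rather than by physical adjacency, so each class is scattered, without anatomical boundary, across the cortical sheet.

```python
# "Equivalence classes" as membership by shared activity, not adjacency.

import random
random.seed(1)

# Twenty "neurons" at random positions on a 1 x 1 cortical sheet.
neurons = [(random.random(), random.random()) for _ in range(20)]

# Membership stands in for "sharing a spatially coherent oscillation";
# crucially, it is independent of position.
membership = [random.randrange(3) for _ in neurons]

for cls in range(3):
    members = [p for p, m in zip(neurons, membership) if m == cls]
    if not members:
        continue
    xs = [x for x, _ in members]
    print(f"class {cls}: {len(members)} members, "
          f"x-spread {max(xs) - min(xs):.2f} of the sheet")
# Each class typically spans most of the sheet: a coherent functional
# unit with no contiguous anatomical boundary.
```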

He argues against the proposal of “phase locking”, often cited as an explanation of cortical function, (of the “binding problem”), because of the scarcity of periodically oscillating neurons in the cortex. “They are found occasionally, but, in general conform to a Poisson distribution” –i.e. random statistical output. Also, because of the paucity of time-lagged correlations between pairs of neurons, “the definitions of ‘phase locking’ and ‘phase coherence’” cannot be made. They can only be derived for discrete frequencies. Whence, then, comes the “phase” in “phase locking”? On another tack, he argues against this conception because of the “slow onset and long duration of bursts of synchrony in comparison to the rapidity of object recognition.” “In a typical perceptual event lasting on the order of a tenth of a second, a participating neuron ...has time to fire only once or twice if at all.” “Phase locking” is only meaningful with repetition, (i.e. the actual existence of a “phase” to begin with).

Freeman concludes: “The transform effected by the output path defines the self-organized macroscopic activity as the cortical ‘signal’.” “In brief, ... the central code cannot be the same as the peripheral code.” (Freeman, 1994, my emphasis) He argues ultimately that the brain is a self-organizing entity, specifically obeying the laws of Chaos theory, (“Chaos can make as well as destroy information!”). I am frankly unqualified to judge this aspect of his argument. His physiological case, (i.e. the connectivity of the CNS), however, is entirely sufficient in itself to demonstrate the kind of mapping, the broadest logical potential of “schematic GUI’s”, and their explicit relevance to cognition. This model actually does “cross party lines”. I don’t think his physiological argument is answerable. This is how the brain, and specifically the cortex, actually works.

That the brain is, in fact, “self-organized” is exactly the case I am making. I argue that it is self-organized specifically for optimal efficiency, (i.e. urgency / risk), not for reference. Freeman’s case, I believe, constitutes an actual instance demonstrating the deepest possibilities of “schematic models”. It demonstrates the possibility of a truly useful model organized on non-topological principles, and, as such, demonstrates the deepest capabilities of a schematic GUI. This is not just “a level of abstraction.”

But where, accepting Freeman’s description of the actual brain, do these cell assemblages, (these “equivalence classes”), come from, and what is their function? How do these particular entangled arrays of cells, interconnecting and overarching “the less than one percent of the neurons lying within the radius of contact”, arise? I propose that they arise evolutionarily –as internal, blind organizations of function. This is exactly what we would expect the organizing principle of a “self-organizing” metacellular entity to be. {21} Representation is neither required, nor, accepting Freeman, is it possible in cortex. This is what we would expect if neural organization were modeled on efficiency over “truth” -and how. Our “percepts”, moreover, are what we would expect if we joined the loop of output to input!

“In particular, Maurice Merleau-Ponty in ‘The Phenomenology of Perception’ [2] conceived of perception as the outcome of the ‘intentional arc’, by which experience derives from the intentional actions of individuals that control sensory input and perception. Action into the world with reaction that changes the self is indivisible in reality, and must be analyzed in terms of ‘circular causality’ as distinct from the linear causality of events as commonly perceived and analyzed in the physical world.” (W.J. Freeman, 1997) {22}

 

2.4.2 AN EXPLICIT MODEL OF THE MIND:

If we turn our perspective around and think of our (input) topographic maps as the looping, re-entrant extension of our output, then we can clearly see them, (and their “objects”), in their specific role as organizing artifacts of cortical function itself.  Our “percepts” are just the combined-in-one icons previously described in the “engineering” argument!  They are the “A-D”, (“analog/digital”), converters, so to speak, of the reentrant loop of process. {23}  This is what we would expect taking “percepts” as expressly schematic objects of process. These are what we would expect to see!  (See Figure 9.)  I propose that our cognitive interface lies precisely in the topobiological models themselves, mediating between an unknowable externality and the optimized functionality of the cortex.  I claim that this constitutes an explicit and non-representational model for the mind.

Figure 9

GOD’S EYE?
(Edelman -to Freeman -to Edelman!)

Freeman’s model exposes a new paradigm for models. It also exposes the possibility of a new correspondence with reality. We want to believe that our knowledge of reality is direct –or at least parallels that reality. How could it be otherwise? How could a model be other than “an abstraction” and still be useful? Moreover, what is the evolutionary rationale for all of this? Modern science says that what truly is, absolute reality, (or “ontology” to use an old but precise word), consists of some ultimate particles: atoms or subatomic particles, quarks, etc. We are allowed to retain our normal view of reality within this view however because we envision our ordinary objects, (baseballs, you, me, the sun, etc.), as spatial containers, (and logical, theoretical hierarchies), in the new absolute reality we are forced to believe in. We may still preserve the sense of our ordinary objects as physical and logical clusters, (hierarchies), of those deeper existences. I can think of myself as a cluster of atomic particles and fields shaped like me, doing all the things I do, and positioned in ontic reality next to other things and persons just as I ordinarily see myself. There is a necessary belief in a continuity, and a contiguity, (“next-to-ness”), in this belief system. This is the “hierarchy” or “logical containment” implicit in the Newtonian World and it is mirrored in the hierarchies of contemporary mathematics and of logic. Truly modern science says otherwise, however. Quantum theory and Relativity say that the world, (reality), is an even stranger place. Freeman’s conclusions, moreover, do not allow it at all. If we live anywhere, we live in cortex.

ON CHURCHLAND:

“At some point in evolutionary history, nature performed a “good trick”. It allowed for an internal representation of environment…. and this allowed competence in the larger world.” (P.S. Churchland, paraphrase)

But look at the reality. Somehow two neurons joined together to form an input/output loop. In a metacellular organism, how could nerves themselves arise? And why? A nerve is a specialized cell that communicates from one area of the metacellular to another. How could it interpose itself between the parallelism of input and the parallelism of output? Where and how could it begin? As a correspondent of mine noted: “It is clear that mollusks and ants, indeed all the ‘lesser’ animals exist and function in this world and it is further clear that most of them have nowhere near the neural capacity necessary for a representative model. Perhaps it is time to turn our perspective around.”

If we turn the question around however, it makes more sense. Suppose there arise, (by mutation), some (blind?) processes that are evolutionarily valuable. This is where it makes sense to attach connections -for internal coordination, but those connections must be dictated by efficacy, (survival). At some point we might add a topological parallelism, but only as a modulator of the core process. This is our world, not God’s. We do not and cannot have a God’s eye view.

3. THE FORMAL AND ABSTRACT PROBLEM:

3.1 THE FORMAL ARGUMENT

Consider, finally, the formal and abstract problem. Consider the actual problem that evolution was faced with. Consider the problem of designing instrumentation for the efficient control of both especially complex and especially dangerous processes. In the general case, (imagining yourself the “evolutionary engineer”), what kind of information would you want to pass along and how would you best represent it? How would you design your display and control system?

It would be impossible, obviously, to represent all information about the objective physical reality of a, (any), process or its physical components, (objects). Where would you stop? Is the color of the building in which it is housed, the specific materials of which it is fabricated, that it is effected with gears rather than levers, -or its location in the galaxy- necessarily relevant information? (Contrarily, even its designer’s middle name might be relevant if it involved a computer program and you were considering the possibility of a hacker’s “back door”!) It would be counterproductive even if you could, as relevant data would be overwhelmed and the consequent “calculus”, (having to process all that information), {24} would become too complex and inefficient for rapid and effective response. Even the use of realistic abstractions could produce enormous difficulties in that you might be interested in many differing, (and, typically, conflicting), significant abstractions and/or their interrelations. {25} This would produce severe difficulties in generating an intuitive and efficient “calculus” geared towards optimal response.

For such a complex and dangerous process, the “entities” you create must, (1) necessarily, of course, be viable in relation to both data and control -i.e. they must be adequate in their function. {26}  But they would also, (2) need to be constructed with a primary intent towards efficiency of response, (rather than realism), as well -the process is, by stipulation, dangerous! The entities you create would need to be specifically fashioned to optimize the “calculus” while still fulfilling their (perhaps consequently distributed) operative role!

Your “entities” would need to be primarily fabricated in such a way as to intrinsically define a simplistic operative calculus of relationality between them -analogous to the situation in our training seminar. Maximal efficiency, (and safety), therefore, would demand crystallization into schematic virtual “entities” -a “GUI”- which would resolve both demands at a single stroke. Your objects could then distribute function, (in a “global / cortical mapping”), so as to concentrate and simplify control, (operation), via an elementary, intuitive calculus. These virtual entities need not necessarily be in a simple (or hierarchical -i.e. via abstraction) correlation with the objects of physical reality however. {27}  But they would most definitely need to allow rapid and effective control of a process which, considered objectively, might not be simple at all. It is clearly the optimization of the process of response itself –i.e. a simplistic “calculus”- that is crucial here, not literal representation. We, in fact, do not care that the operator knows what function(s) he is actually fulfilling, only that he does it (them) well!
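A back-of-the-envelope calculation, (the numbers are arbitrary), shows why the schematic route wins under urgency: a representational “calculus” must scale with the state space of the process, while a schematic one need only scale with the classes of response it must distinguish.

```python
# The "large database problem" in miniature, (numbers arbitrary).

parameters = 20        # monitored quantities
values_each = 10       # discriminable values per quantity

representational_states = values_each ** parameters
schematic_states = 3   # e.g. {safe, caution, danger}: response classes

print(f"representational state space: {representational_states:.3e}")
print(f"schematic state space:        {schematic_states}")
# Under urgency only the latter admits a tractable calculus of response.
```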

3.2 THE SPECIFIC CASE OF BIOLOGY

Biological survival is exactly such a problem! It is both especially complex and especially dangerous. It is the ultimate case of complexity and embodies a moment-by-moment confrontation with disaster. It is a schematic model in just this sense, therefore, that I argue evolution constructed, and I propose it is the basis for both the “percept” and the “mind”. It is just the converse of the argument made above, however, that I propose for evolution. It is not the distribution of function, but rather the centralization of disparate atomic biological function into efficacious schematic -and virtual- objects that evolution effected while compositing the complex metacellular organism. (These are clearly just the complementary perspectives on the same issue.)  {28}

But let’s talk about the “atomic” in the “atomic biological function” of the previous statement. There is another step in the argument to be taken at the level of biology. The “engineering” argument, (made above), deals specifically with the schematic manipulation of “data”. At the level of primitive evolution, however, it is modular (reactive) process that is significant to an organism, not data functions. A given genetic accident corresponds to the addition or modification of a given (behavioral/reactive) process which, for a primitive organism, is simply beneficial or not. The process itself is informationally indeterminate to the organism however -i.e. it is a modular whole. No one can presume that a particular, genetically determined response is informationally, (rather than reactively), significant to a Paramecium or an Escherichia coli, for example, (though we may consider it so). It is significant, rather, solely as a modular unit which either increases survivability or not. Let me therefore extend the prior argument to deal with the schematic organization of atomic, (modular), process, rather than of primitive, (i.e. absolute), data. It is my contention that the cognitive model, and cognition itself, is solely constituted as an organization of that atomic modular process, designed for computational and operational efficiency. The atomic processes themselves remain, and will forever remain, informationally indeterminate to the organism.

The evolutionary purpose of the model was computational simplicity! The calculational facility potentiated by a schematic and virtual object constitutes a clear and powerful evolutionary rationale for dealing with a multifarious environment. Such a model, (the “objects” and their “calculus”), allows rapid and efficient response to what cannot be assumed, a priori, to be a simplistic environment. From the viewpoint of the sixty trillion or so individual cells that constitute the human cooperative enterprise, that assumption, (environmental simplicity), is implausible in the extreme!

But theirs, (i.e. that perspective), is the most natural perspective from which to consider the problem. For five-sixths of evolutionary history, (three billion years), it was the one-celled organism which ruled alone. As Stephen Gould puts it, metacellular organisms represent only occasional and unstable spikes from the stable “left wall”, (the unicellulars), of evolutionary history.

“Progress does not rule, (and is not even a primary thrust of) the evolutionary process. For reasons of chemistry and physics, life arises next to the ‘left wall’ of its simplest conceivable and preservable complexity. This style of life (bacterial) has remained most common and most successful. A few creatures occasionally move to the right...”

“Therefore, to understand the events and generalities of life’s pathway, we must go beyond principles of evolutionary theory to a paleontological examination of the contingent pattern of life’s history on our planet. ...Such a view of life’s history is highly contrary both to conventional deterministic models of Western science and to the deepest social traditions and psychological hopes of Western culture for a history culminating in humans as life’s highest expression and intended planetary steward.” (Gould, 1994)


3.3 RETRODICTIVE CONFIRMATION

Do you not find it strange that the fundamental laws of the sciences, (or of logic), are so few? Or that our (purportedly) accidentally and evolutionarily acquired logic works so well to manipulate the objects of our environment? From the standpoint of contemporary science, this is a subject of wonder -or at least it should be. (cf contra: Minsky, 1985) It is, in fact, a miracle! {29} From the standpoint of the schematic model, however, it is a trivial, (obvious), and necessary consequence. It is precisely the purpose of the model itself! This is a profound teleological simplification.

3.4 CONCLUSION, (SECTION 3)

Evolution, in constructing a profoundly complex metacellular organism such as ours, was confronted with the problem of coordinating the physical structure of its thousands of billions of individual cells. It also faced the problem of coordinating the response of this colossus, this “Aunt Hillary”, (Hofstadter’s “sentient” ant colony). {30} It had to coordinate their functional interaction with their environment, raising an organizational problem of profound proportions.

Evolution was forced to deal with exactly the problem detailed above. The brain, moreover, is universally accepted as an evolutionary organ of response, (taken broadly {31}). I propose that a schematic entity, (and its corresponding schematic model), is by far the most credible possibility here. It can efficiently orchestrate the coordination of the ten million sensory neurons with the one million motor neurons, {32} -and with the profound milieu beyond. A realistic, (i.e. representational /informational), “entity”, on the other hand, would demand a concomitant “calculus” embodying the very complexity of the objective reality in which the organism exists, and this, I argue, is overwhelmingly implausible. {33}

Figure 10


4. THE CONCORDANCE: BIOLOGY’S PROPER CONCLUSION

Now I will move to what I think is the most important purely scientific implication of the combination of this and my paper, (“Consciousness, a Simpler Approach...”, Iglowitz, 2001). I call it “the concordance”. In “Consciousness”, I argued that the objects of mind are solely virtual. I argued that they are logically and implicitly defined by the axioms of brain function. I believe this line is profoundly explanatory for the deepest dilemmas of mind as we normally conceive it. In the present paper, I have argued another course -that the objects of mind are schematic artifacts. They are optimizing metaphors, artifacts integrating primitive brain process.

Now I propose the biological argument which relates the two themes. If we identify the “rule” of the brain -which, accepting Cassirer’s conclusions, (Cassirer, 1923), specifies a distinct logical concept- with the rule of “structural coupling” of the human organism, (after Maturana and Varela’s profound characterization of biological response), then “mind” may now reasonably be defined as the “concept”, (/rule), of the brain. Given that the rule is of the specific structure of my extended concept, however, (i.e. the concept of implicit definition -my second hypothesis, Iglowitz, 1995), mind becomes the specifically constitutive concept of the brain in the sense of Immanuel Kant, and not an ordinary concept. It is a concept necessary to -inbuilt into- our cognition, (in the exact sense that Kant used the word), not one imposed upon it. It is not something with which we conceive; it is, rather, the “we” which conceives. Following the arguments of my prior paper, it implicitly defines and therefore knows its “objects”.

Combining the results of the two papers, I now assert a concordance. I claim that their conclusions are commensurable. “Consciousness” made the case that it is only by considering our mental objects as operative logical objects, as objects implicitly defined by the system, that the wholeness and the logical autonomy of sentiency becomes possible. Referential objects do not convey the same possibility. The present paper has made the case that it is only as virtual and metaphorical objects, artifacts of the system of control, that the profound difficulties of the integration of megacellular response may be overcome. Again, referential objects do not convey the same possibility. The “objects” of each thesis are thus solely objects of their systems! The objects of the first, purely logical and cognitive thesis are thus commensurable with the objects of the second, purely biological and operative thesis. The discovery of such correspondences has always been crucial in the history of science.

But biology affirms the correlation. Modern day biology necessarily must reduce logic itself! From an evolutionary perspective, human logic must itself be taken as a strictly biological, evolutionarily derived rule of response, (broadly conceived –see {31}). So too must the concepts and categories contained within it. Logic can no longer be taken as “God-given”, or “God-knowledgeable”. Such mysticism is not compatible with the perspective of modern science. It is more than plausible, therefore, for biology to identify that human “logic”, (that bio-logic -and the “implicit definition” resident within it), with the rules governing the “objects” of the cognitive GUI of the present paper. “Mind”, as the constitutive concept of that bio-logic then, is the biological interface: the constitutive, holistic, and logical, (i.e. bio-logical), expression of the human organism’s organization of response. This conclusion restores “mind” as we normally understand it to biology and enables a science of mind.

This, the biological perspective of the concordance, is, I maintain, the proper biological perspective on the whole of the mind-brain problem. It is where biology must ultimately come to stand. The special significance of the “concordance” for neuroscience is that it finally enables a viable perspective within which biological, and specifically neural, process might be scientifically correlated with the actual specifics of the mind under evolutionary and operational paradigms -and these remain the most productive heuristic principles in contemporary biology. It opens the prospect of a physical description of mind itself.

Our perceptual objects are not objects in reality; they are the implicitly defined logical objects, (alternatively, clearly now, operative objects), of this constitutive logic. They are objects of process.


5. PLAIN TALK:


Let’s talk loosely for a bit. We do not start with absolutes anywhere in our logical and scientific endeavors. Somewhere we start with beliefs. I, for one, believe that I have a mind and a consciousness in the naive senses of those words. I think most of you believe that you do too. By this we do not mean merely that our bodies mechanically and robotically produce words and actions which “cover the territory” -which merely simulate, (substitute for), sentiency in our naive sense of it- but that there is some universal and unified existence which is aware. But how?

The solution I propose lies in the combination of the concepts of implicit definition, virtual existence -and logic as biology. This is the only model within our intellectual horizons that seems to hold any promise at all for sentiency in our ordinary sense of it. It suggests the only scientifically plausible solution to “the mind’s eye” and the “Cartesian theatre” and the only non-eliminativist answer, (for “mind”), to the homunculus problem. But these are answers which must exist if mind in our ordinary sense is to be real. Implicit definition permits knowing, (as a whole), what are, in some real sense, our distinct and separate parts precisely because those parts, (objects), are in fact non-localized and virtual (logical) expressions of the whole. It opens the first genuine possibility, therefore, for a resolution of this essential requirement of “naive” consciousness.

But that pathway, (implicit definition), does not make sense from the standpoint of representation! Implicit definition solves the problem logically -from the standpoint of constitutive logic -and speaks to nothing other than its own internal structure. “Objects”, (under this thesis), are known to a system, (i.e. universally/globally), only because they are specifically expressions of the system. Implicit definition becomes a viable and natural solution to the problem of awareness, therefore, only when the objects of consciousness themselves are conceived operationally and schematically, (and specifically, logically), rather than representatively. When our objects are taken as schematic representations of process, (as per the present paper), the solution becomes both natural and plausible. The logical problem of sentiency is resolved.

How could evolution organize -as it had to organize- the reactive function of this colossus of sixty trillion cells? Even this formulation of the question disregards the yet more profound complexity of the reactivity of the individual cells -also organisms- themselves! It was the overwhelmingly crucial issue in the evolution of complex metacellulars. My thesis of schematism is both viable and plausible in this context.

But what does this evolutionary development and organization of the reactive process of complex metacellulars have to do with “information”?

That the progressive evolutionary reactivity of this megacolossus occurred under the bounds of real necessity is, of course, a given. It is the basic axiom of Darwinian “survival”. But that it could match that bound -i.e. that it could achieve a (reactive) parallelism to it -i.e. “information!” -is a hypothesis of quite another order and teleologically distinct. It is, I assert moreover, mathematically immature. Objective reality is a bound on the evolutionary possibility of organisms, but under that bound infinitely diverse possibilities remain. I may, as a crude metaphor for instance, posit an infinity of functions under the arbitrary bound Y = 64,000,000. I may cite semicircles, many of the trigonometric functions, curves, lines ... ad infinitum. Only one of these matches the bound, and only a specific subset, (the horizontal lines Y = a, a <= 64,000,000), parallels it. It is a question of the distinction between a bound and a limit. (See Figure 11)
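The distinction can be stated in conventional notation. What follows is a minimal formalization of my crude metaphor, (the symbols B and F are mine and carry no biological weight). Let B = 64,000,000 and consider the family

\[ \mathcal{F} \;=\; \{\, f : \mathbb{R} \to \mathbb{R} \;\mid\; f(x) \le B \ \text{for all } x \,\} \]

\(\mathcal{F}\) has infinitely many members -semicircles, trigonometric curves, lines, ad infinitum. Exactly one member matches the bound, (\(f(x) = B\)), and only the horizontal lines \(f(x) = a\), \(a \le B\), parallel it. Satisfying a bound is thus a radically weaker condition than matching or paralleling it -and that is precisely the gap between “survival” and “information”.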

The reactive evolutionary actuality of an organism certainly exists within, (and embodies), a lower bound of biological possibility. But that some such, (any such), organism, (-to include the human organism!), embodies a greatest lower bound -i.e. that it, (or its reactivity), matches and meets, (or parallels, i.e. knows!), the real world -does not follow. It is incommensurate with the fundamental premise of “natural selection” and stands as the “parallel postulate” of evolutionary theory. Organisms do not know, organisms do! Organisms survive!

How much more plausible, is it not, that the primary and crucial thrust of evolution was coordination, and specifically a coordination of allowable or appropriate, (rather than “informed”), reactive response? I submit that from a biological perspective the schematic object is far more plausible than the representative one. It involves no “magic”, and is totally consistent with our deepest conceptions of biology.

I submit that no other viable, (i.e. non-eliminative or non-dualistic), explanation -an actual explanation rather than a prevarication- has ever even been offered for mind and consciousness as understood in our ordinary sense. The argument, then, is one of demonstration. If no truly viable alternative can be offered, then this one must be considered seriously.

The operational process of brain, (and its evolutionarily determined structural optimization), I argue, implicitly defines its “objects”, its “entities” in the same sense and in the same manner that the “process” of an axiom system implicitly defines its “objects”. The “objects of perception” are “intellectual objects”. They are (constitutive) conceptual objects. But those, in turn, are schematic objects, (alternatively, “operational objects”), only, in no necessarily simple correspondence with objective reality. They are metaphors of response.
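To see the sense of this analogy concretely, (the example is mine, for illustration only), recall how an axiom system implicitly defines its objects in mathematics. The axioms of group theory, for instance, never say what their objects are; the objects are exhausted by the relations the axioms impose:

\[
\begin{aligned}
&\text{(G1)}\quad (a \cdot b) \cdot c = a \cdot (b \cdot c)\\
&\text{(G2)}\quad e \cdot a = a \cdot e = a\\
&\text{(G3)}\quad a \cdot a^{-1} = a^{-1} \cdot a = e
\end{aligned}
\]

The “identity” \(e\) is not a thing pointed to; it is whatever plays this role, and numbers, rotations, and permutations serve equally well. My claim is that the “objects of perception” stand to the process of the brain just as \(e\) stands to (G1)-(G3).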

Figure 11 –an Illustration of Bounds and Limits


(1) and (3) represent the best and the least possible performance for an organism over the domain of its behavior in absolute (ontic) reality. Less than (3) results in lessened survivability or death; greater than (1) is impossible, as it is perfect performance with perfect knowledge in actual reality. Between the two bounds, “adequate performance”, ((2), (2’), (2”), ...), need not match, nor even parallel, these outer bounds. [Note: 2’ and 2” parallel 1, but 2 does not!] Any curve within them is consistent with evolution. Edelman, for instance, talks about the multiple, non-derivative antibody responses to a given antigen. The same must surely apply to cognition itself, another “recognition system”, (using Edelman’s terminology). Cognition and response must be adequate, but it is not obvious that there is only one way -a mirroring way. Nor is it inherent that all ways be commensurate! An organism’s performance in its environment is measured, fundamentally, not in perfection or in rationality, but in simple adequacy. It is very easy to envision multiple, noncommensurate, blind-though-adequate responses to a given situation. It is not easy to envision rational responses informed by information.
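Stated formally, (the notation is mine, added only to fix the picture), the figure asserts

\[ L(x) \;\le\; f(x) \;\le\; U(x) \qquad \text{for all } x \in D, \]

where \(U\) is curve (1), (perfect performance), \(L\) is curve (3), (minimal survivability), and \(D\) is the domain of the organism’s behavior. Survival requires only that \(f\) remain within the band. It does not require that \(f\) equal \(U\), nor that it parallel \(U\), (\(f = U - c\)), nor that two adequate solutions \(f\) and \(g\) be commensurate with one another.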



APPENDIX, (FREEMAN AND AUTOMORPHISM)

An aside: a fascinating quote from Freeman, (it rings strong “bells” in my head)!

“Some people turn to chemicals as a way to deepen the privacy within solipsistic chasms, and in order to retreat from social stress into inner space. A few have induced these states so as to peer through the solipsistic bars and dirty windows in order to see what is ‘really there’, although, as minds disintegrate, what comes are swirls and tinglings, and ultimately the points of receptor inputs like stars, flies or grains of sand.” (Freeman, 1995, my emphasis)

Freeman and I have the same problem -in our innate resistance to the consequences of our own nonrepresentationalism. I too have wrestled with the “points” of sensory input -“like stars, flies or grains of sand”. The conclusion I have reached, however, is that our “points” are, in fact, primitive, atomic, (unspecified) process, not information. From the simpler perspective of ordinary biology, this is more obvious. These processes, (i.e. pragmatic and adequate, but not informational processes), are the necessary basic building blocks of biological cognition. These are our “points”. The difficulty lies in the automorphism we presume in cognition itself, and this is not an easy problem.

How can science continue to make new, profound discoveries?  How can the level of verifiable intricacy continue to multiply, seemingly without bounds, within the legitimate confines of science?  How can the various branches of science continue to integrate and resolve themselves within one comprehensive picture?  How could statistics work at all -and why does it in fact work?  These are the real and crucial questions that a non-representational conception of mind must address.


The fact that the overall picture is getting better -that it is completing itself- does not in itself invalidate the hypothesis that it is non-representational, however.  Nor does its overwhelming level of intricacy.  To answer the objection, let me reiterate a counter-question from my book.  Is it not possible that we, like a swarm of bees, are merely building, (completing), a “hive”, (our worldview)?  We may be completing our interface with externality, but it does not follow at all that that interface is representational.  What does follow is that it is the most efficient one possible within our context.

We presume that our science maps back, (automorphically), onto the very model we visualize.  But the path of the automorphism we seek, I propose, lies through the very “gears and levers” of the original evolutionarily derived topobiological cognitive model itself, (re-using its “objects”) -through another iteration, in another re-entrant mapping which supplies the mechanics and the transformation, (back into Freeman’s non-topological dispersive mapping into the overall brain), that we seek.  I propose that reafference within the loop of brain function combines with input from outside the loop, (passing through the environment), to yield a consistent, compound map which either does, or does not, confirm our theoretical constructs.  Nowhere does this conception demand the absolute (ontic) reality of those constructs, however.  It is a reuse of our evolutionarily pragmatic (cortical) objects, (like Rosch’s prototypes?), saying nothing whatsoever about the real (external) world in which we live.
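A toy sketch may make the intended mechanics clearer.  It is purely illustrative -every name, and the choice of a numerical “world”, is my hypothetical assumption, not a claim about actual neural implementation:

# A toy sketch of the "compound map" above -- purely illustrative.
# All names, and the numerical "world", are hypothetical assumptions.

def internal_prediction(model, action):
    """Reafference: what the system's own map expects to follow an action."""
    return model[action]

def external_return(world, action):
    """Input from outside the loop: what actually returns via the environment."""
    return world(action)

def construct_confirmed(model, world, actions, tolerance=0.1):
    """Compare the two channels; consistency confirms the construct."""
    for a in actions:
        if abs(internal_prediction(model, a) - external_return(world, a)) > tolerance:
            return False  # the construct fails within this context
    return True  # consistent -- though this says nothing of ontic reality

# A model that expects a doubled return, tested against a world
# which does, (roughly), double:
model = {1: 2.0, 2: 4.0, 3: 6.0}
world = lambda a: 2.0 * a + 0.05
print(construct_confirmed(model, world, actions=[1, 2, 3]))  # True: "confirmed"

Note that “confirmation” here is nothing more than consistency between the two channels of the loop; at no point does the check require, or deliver, the ontic reality of the model’s “objects”.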

Why is this an important advance in our perspective?  Because it allows the use of my second hypothesis of “implicit definition” in a legitimate scientific context, (“A Shortcut to the Problem: Consciousness per se!”).  That second thesis enables, for the very first time, legitimate scientific conceptions of the most fundamental aspects we demand for “mind” itself: i.e. a “Cartesian Theatre”, the elimination of the problem of the “homunculus”, and “knowing” per se.  These are not trivial consequences.

Thus microscopy, anatomy, biology, physics … are fed through the same interface to yield an image -of the body of another being or of our own, for instance, or the nature of our environment.  But the “objects” are functions of the interface itself, not of an external ontology.  This, I believe, is the mechanics of the automorphism we seek -i.e. the one processed by the brain, using its own transformation and mapping back onto its own map, reusing the “objects” of that map.  It is Edelman plus Freeman plus Merleau-Ponty and back to Edelman.  It already exists.  (The automorphism can be skewed by the intent of the model, however -i.e. it can be processed to a different purpose.)

We do not need yet another map, (e.g. Lehar, 2003), inside the brain to accomplish our purpose.  As Dreyfus notes: “… the human world is the result of this energy processing and the human world does not need another mechanical repetition of the same process in order to be perceived and understood.”  (Dreyfus, 1992, p. 268)

(The whole of this discussion is nonsense, of course, in the absolute form within which it is stated.  Does our feedback really preserve parallelism in the absolute form I have proposed?  It is a valid statement within a context, but in an absolute ontological sense these are things we can never truly know.  A proper formulation must await the introduction of a completely new philosophical perspective -i.e. that of Cassirer's Philosophy of Symbolic Forms which I detailed in Chapter 4 of my book, (“Virtual Reality: Chapter 4”).  This supplies the rigorous, (and biologically necessary), scientific epistemological relativism required by the parameters of the problem.)


GOD’S EYE?


                                      Edelman to Freeman to Edelman

                                      ----------------------------------------     = Epistemological Relativism!

                                               (DIV Merleau-Ponty)

Quoting Freeman:

“To explain how stimuli cause consciousness, we have to explain causality.  [But] We can’t trace linear causal chains from receptors after the first cortical synapse, so we use circular causality to explain neural pattern formation by self-organizing dynamics.  But an aspect [a key aspect] of intentional action is causality, which we extrapolate to material objects in the world.  Thus causality [as far as humans are concerned] is a property of mind, not matter.” (Freeman, 1999)

Where is the world outside? What is the world outside? Freeman describes his stance as “epistemological solipsism”. I understand his rationale, but let me suggest something else. As realists, we necessarily accept the actual existence of an external reality, (as does Freeman), but the fact is we can never know it. Instead of epistemological solipsism, (which is circular ontological language at best), let me suggest another characterization: i.e. ontic indeterminism. We must accept the existence of externality, but, as biological organisms, there is not even a possibility that we may ever know it. We can never attain a “God’s eye view”. There is a good side to this, however. If we accept the existence of other beings as well, (as I think both you and I do as intentional belief), then we are not limited to enclosing them hierarchically. We are not obliged to limit them to their “properties”. Who is old or young? Who is black or white? Who is crippled or sound? Who is beautiful or ugly?  What is the possibility and the “soul” of man?

I have made a point in another writing that I think is worth repeating here. I argued, (Iglowitz, 1995), that it is not important that the “operator” of such a complicated process knows what it is, (specifically), that he is doing. It is important only that he does it well. It is crucially important that he does it diligently, however. It is imperative that he be locked into the loop of his virtual reality -that he “pay attention”. This introduces the necessity of an inbuilt realistic imperative -i.e. a mechanical guarantee of his dedication, (see Hume). The universal and dogmatic belief in the simple reality of our natural world is thus itself a consequence of my thesis -and the greatest obstacle to its acceptance!

Speaking of falsifiability, consider Dennett’s “Color Phi” from our new perspective. Here is a case where the mental content is falsifiable under the standard interpretation. And yet it exists -it has been confirmed repeatedly. What else follows? Phantom limbs, blindsight? Are these not clear examples, falsifying the standard paradigm, (i.e. representationalism), and easily incorporated into the converse picture of a virtual mind?

CONCLUSIONS:

This paper, by itself, does not answer the questions of consciousness.  I do claim it as a valid biological perspective, and part of the solution, however.  It is important at this early stage because it enables my next crucial hypothesis: i.e. that of “implicit definition”.  That hypothesis finally offers an explanation of the profoundest problems of mind per se.  It elucidates Leibniz’ profound problem: “How is it possible for the one to know the many?”  It answers it by finding that “the many” are, in fact, part of “the one”.  The logic of brain implicitly defines our objects because they are operational objects.  This is how we are able to know them!  This is the ground of the “Cartesian Theatre”, and it finally lays the “homunculus” to rest.  (See “Consciousness: a Simpler Approach to the Mind-Brain Problem”).  But implicit definition as a solution to these problems makes sense only in an operational system, not an informational one.

But still we are not at the end of our quest.  There still remain two more critical steps.  The first is an examination of what any kind of knowledge per se could possibly be.  Ernst Cassirer proposed that all knowledge is axiomatic.  Otherwise stated, it is all hypothesis and organization, (and, of course, commensurate with experience).  His brilliant conclusion was to realize that there could be many beginnings, many organizations, and that the comprehensiveness of a given theory did not preclude the comprehensiveness of another.  What it leads to is a conclusion of the indeterminacy of our absolute understanding of the world around us, (ontic indeterminacy).  But this is just what we would expect of the biological organisms we both understand ourselves to be.

This frustrating conclusion actually leads to the proper ground for an understanding of “mind”, however.  That ground lies in the realization of our basic realist posture itself -our belief system.  It is what we, as realists, absolutely refuse to give up and which is innately incorporated in any theory we will countenance.  Putnam, Lakoff and Edelman, (and Kant himself), propose four basic tenets of scientific realism.  They are:

(1) “A commitment to the existence of a real world external to human beings
(2) a link between conceptual schemes and the world via real human experience; experience is not purely internal, but is constrained at every instant by the real world of which we are an inextricable part
(3) a concept of truth that is based not only on internal coherence and “rational acceptability”, but, most important, on coherence with our constant real experience
(4) a commitment to the possibility of real human knowledge of the world.”  (I differ with this last postulate for obvious reasons.)

But I propose a further postulate, (elaborating on the innate sense of the first three above).  I propose the actual ontic existence of an “interface” between the “real world” and “experience”, consistent with Freeman’s conclusions, for instance.  It is the actual substance of this “interface” that I propose is the substance of the mind.  (Cassirer places strong limitations on our description of this interface, however.)  My third hypothesis is to assume that this interface is structured in the same way as the brain and experience, (my first and second hypotheses).  All the other substantive problems are answered in my first and second hypotheses.  Thus it follows that we, (this interface), are alive, we, (this interface), are conscious, and we, (as minds), do exist!

REFERENCES:

Cassirer, E. (1923). Substance and Function and Einstein’s Theory of Relativity. (Bound as one; translated by William Curtis Swabey). Open Court.
Cassirer, E. (1953). The Philosophy of Symbolic Forms. (Translated by Ralph Manheim). Yale University Press.
Dreyfus, H. (1992). What Computers Still Can’t Do. MIT Press.
Edelman, G. (1992). Bright Air, Brilliant Fire. BasicBooks.
Freeman, W. J. (1994). Chaotic Oscillations and the Genesis of Meaning in Cerebral Cortex.
Freeman, W. J. (1995). Societies of Brains. Lawrence Erlbaum Associates, Inc.
Freeman, W. J. (1997) / Sarfatti, J. Comments on “Constructing a Conscious Android”. http://www.qedcorp.com/pcr/pcr/freeman1.html
Freeman, W. J. (1999). Consciousness, Intentionality, and Causality. Journal of Consciousness Studies, Dec. 1999.
Gould, S. J. (1994). The Evolution of Life on the Earth.
Hofstadter, D. (1979). Gödel, Escher, Bach. Vintage.
Iglowitz, J. (1995). Virtual Reality: Consciousness Really Explained. Online at www.foothill.net/~jerryi
Iglowitz, J. (1996). The Logical Problem of Consciousness. Presented to the UNESCO “Ontology II Congress”, Barcelona, Spain.
Iglowitz, J. (2001). Consciousness: a Simpler Approach to the Mind-Brain Problem (Implicit Definition, Virtual Reality and the Mind). Online at www.foothill.net/~jerryi
Lakoff, G. (1987). Women, Fire and Dangerous Things. University of Chicago Press.
Lehar, S. (2003). Gestalt Isomorphism and the Primacy of Subjective Conscious Experience: A Gestalt Bubble Model. Behavioral and Brain Sciences. (Paper in process.)
Maturana, H. and Varela, F. (1987). The Tree of Knowledge. Shambhala Press.
Minsky, M. (1985). The Society of Mind. Touchstone.
Smart, H. (1949). Cassirer’s Theory of Mathematical Concepts. In The Philosophy of Ernst Cassirer. Tudor Publishing.

ENDNOTES:

1. See Iglowitz, 1995

2. See Iglowitz, “Consciousness, a Simpler Approach…”, 2001 and Iglowitz, 1995

3. See Iglowitz, “Consciousness, a Simpler Approach…”, 2001 for the logical problem, and Iglowitz, 1995, Chapters 3 & 4 for the epistemological problem and a summary of Cassirer’s thesis.

4. Maturana and Varela, 1987

5. Webster’s defines “calculus”: “(math) a method of calculation, any process of reasoning by use of symbols”. I am using it here in contradistinction to “the calculus”, i.e. differential and integral calculus.

6. A classroom is a kind of training seminar, after all!

7. Is this not the usual case between conflicting theories and perspectives?

8. Dennett’s term “heterophenomenological” -i.e. with neutral ontological import -is apt here.

9. Edelman, 1992, pps. 236-237, his emphasis.

10. Iglowitz, 1995, especially Chapter 4

11. together: the possible conceptual contexts

12. Cf. the arguments of Chapters Two and Four for a detailed rationale

13. Cf. Iglowitz, 1995, “Afterward: Lakoff/Edelman”, for a discussion of mathematical “ideals” which bears on this discussion.

14. this relates to the issues of “hierarchy” which I will discuss shortly

15. Their designers are the “lecturers”, and the instruments they design are the “objects” of their schematic models

16. A couple of other lesser, but still useful, schematic models: A “war room”, (a high-tech military command center resembling a computer game), is another viable, though primitive, example of a schematic usage. It is specifically a schematic model, expressly designed for maximized response. The all-weather landing display in a jetliner supplies yet another example.

17. Cf. Lakoff, 1987. Also see Iglowitz, 1995, “Afterward: Lakoff, Edelman…”

18. Edelman, 1992

19. The multiple, topological maps in the cortex

20. in the brain’s spatial maps

21. See Maturana, 1987 and Edelman, 1992

22. My function, however, is to introduce a mechanics -which I have done.  Merleau-Ponty is not “my philosopher”, but the concept seems pregnant.

23. This is, at best, a crude metaphor -but it crystallizes the idea nicely.  A more apt characterization would be “topological / non-topological” converters.

24. Cf. Dennett and Dreyfus on the “large database problem”

25. This is typically the case. A project manager, for instance, must deal with all, (and often conflicting), aspects of his task -from actual operation to acquisition, to personnel problems, to assuring that there are meals and functional bathrooms! Any one of these factors, (or some combination of them), -even the most trivial- could cause failure of his project. A more poignant example might involve a U.N. military commander in Bosnia. He would necessarily need to correlate many conflicting imperatives -from the geopolitical to the humanitarian to the military to the purely mundane! Or, in a metaphor on the earlier discussion, he might need to take a “Marxist” perspective for one aspect of his task, and a “royalist” perspective for another!

26.  Simple adequacy is quite distinct from information or parallelism however.

27. See Iglowitz, 1995: Lakoff/Edelman appendix for a discussion of abstraction and hierarchy

28. See Birkhoff & Mac Lane, 1955, p.350, discussion of the “duality principle” which vindicates this move. More simply put, and using Edelman’s vision, it is a question of which end of the “global mapping” we look from!

29. The “anthropic principle” is clearly self-serving and tautological.

30. Cf. Hofstadter, 1979. His is a very nice metaphor for picturing metacellular existence.

31. Freeman has objected to my characterization of the human brain as an “organ of response”.  I understand his objection, as it seems to imply acceptance of “stimulus-response causality” -which is clearly not my intention.  At this level of discussion, I think the characterization is warranted, however.  See Iglowitz 1995, Chapter 4, for a full and better treatment of this issue, and the Appendix to this article for a rationale.

32. Maturana and Varela, 1987

33. See Dreyfus on the “large database problem”. Also see Appendix A of Iglowitz, 1995 for a “combinatory” counterargument.