The Deprojective Theory: The Basics

This post is a part of a series on the deprojective theory.

It is something of a platitude that science does, or ought to, provide us with more than mere empirical data. The ideal, it is said, is for science to provide us with knowledge. There are, of course, complications lurking in any such bromide, but philosophers nevertheless go on to ask what scientific knowledge consists in. Scientific knowledge: what is the real thing? One candidate answer is that science explains rather than merely reports; that is, explanation is what animates scientific knowledge. But, then, what is explanation?

Explanation is itself hard to explain. Many theorists have maintained that explanation is a matter of giving answers to why-questions. These differ from answers to other interrogatives, like what-questions, which are largely the domain of reporting and description, and how-questions, which are the domain of engineering and artistry. But there are still complications. For example, think of Jimmy McNulty, the homicide detective and anti-hero of David Simon's series The Wire, standing alongside a Baltimore coroner over the corpse of a man killed in a vacant apartment building. Both might ask:

Why did this man die?


Both, in their own way, are scientists. But they are asking different questions, and the answers that will satisfy them will naturally differ. The coroner is presumably asking whether the man died of, say, hypoxia, whereas Jimmy is asking who murdered him.

In a forthcoming work, The Foundations of Microeconomic Analysis, I characterize what I claim is the fundamental explanatory methodology of neoclassical microeconomics and, in particular, general equilibrium theory. I call the methodology analogical-projective explanation. The explanatory regime is based on a more general epistemological phenomenon which I call a deprojection. I focus here on the actual mode of explanation rather than its applications.

Both of these are terms of art, but there is really nothing new about them other than the terminology. Analogical-projective explanation belongs to a family of theories of explanation known collectively (and perhaps misleadingly) as deductivist theories. The term is misleading because such theories are meant to cover explanatory inferences of several kinds, including induction as well as deduction.

What do they have in common? All such regimes of explanation require that when someone explains to you why so-and-so is the case, that someone makes an inference in which so-and-so is the conclusion. Why is the sky blue, you ask me? Here is a deduction which explains it: whenever light passes through a scattering medium, it is scattered in inverse proportion to its wavelength (in fact, to its fourth power). Every wavelength of light becomes more visible the more it is scattered. Of all the wavelengths of the visible spectrum (i.e., those excluding x-rays, television waves, and the like), the blue-violet family has the shortest. The preceding three claims are what the philosopher C. G. Hempel called universally quantified natural laws (or laws of nature). Adding the plain assertion that the sky is a scattering medium, together with the rules of the predicate calculus, should yield the conclusion that the sky is blue. In this case, the portion of the deduction not including the conclusion serves as an explanation for the conclusion.
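Schematically (my rendering of the familiar textbook schema, not a quotation from Hempel), a deductive-nomological explanation has the form:

$$
\begin{array}{ll}
L_1, L_2, \ldots, L_n & \text{(laws of nature)}\\
C_1, C_2, \ldots, C_k & \text{(particular antecedent conditions)}\\
\hline
E & \text{(the explanandum)}
\end{array}
$$

The explanans above the line deductively entails the explanandum below it; in the sky example, the three scattering laws play the role of the $L_i$, and the assertion that the sky is a scattering medium plays the role of a $C_j$.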

Such explanatory inferences can be carried out in other inferential systems. Outside of predicate logic, Hempel spent much time on regimes of statistical inference for the purposes of induction.
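In his inductive-statistical model, for instance, the deduction is replaced by an inference of roughly this shape (again my rendering of the standard schema), where the double line marks inductive rather than deductive support and $r$ is a probability close to 1:

$$
\begin{array}{ll}
\Pr(G \mid F) = r & \text{(statistical law)}\\
Fa & \text{(particular fact)}\\
\hline\hline
Ga & [r]
\end{array}
$$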

Figure 1. Some deprojectionists, surely. (Royalty-free, Vault Editions.)



Einstein, Podolsky, and Rosen

This post is a part of a series on physical theory.

In 1935, Einstein, Podolsky, and Rosen published a paper in the Physical Review entitled “Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?” The paper constitutes part of the foundation for Einstein’s now well-known skepticism about the entrenched probabilism of quantum mechanics, which is sometimes summarized by the quote often attributed to him that God does not play dice with the universe.

EPR (as they are now called) propose not experimental results but rather a thought experiment and a philosophical argument. In a particularly vivid variant of their thought experiment due to Bohm (1951), now sometimes called the EPR experiment, two particles, an electron and a positron, separate at a point of radioactive decay. According to the experiment, they speed off in opposite directions in such a way that their total spin is zero. Along a chosen axis, the spin of each particle can take one of two real values ($+\frac{1}{2}$ and $-\frac{1}{2}$, in units of $\hbar$). For the observable property of spin along this axis, I will write for short "$S$."

According to the formalism of quantum mechanics, the systems describing these two particles can, individually, be formulated as Hilbert spaces (that is, complete complex-valued inner product spaces), and the spin observable along the chosen axis is represented in the formalism by a Hermitian (self-adjoint) operator. The observable values for the operator are then its eigenvalues (and since the operator is Hermitian, these values will be real numbers). Corresponding to each eigenvalue, there is an eigenspace. As a fact of linear algebra, there is a projection operator which takes any complex-valued vector in the space and projects it onto the eigenspace defined by the $i$th eigenvalue. For an observable $A$, I will write its $i$th eigenvalue as "$a_i$"; likewise, the projection operator onto the eigenspace for this eigenvalue will be written "$P_{a_i}$."
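To fix ideas, the spectral theorem guarantees that such a Hermitian operator decomposes in terms of exactly these ingredients; in the notation just introduced:

$$A = \sum_i a_i \, P_{a_i},$$

where the sum ranges over the distinct eigenvalues of $A$.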

States of the positron system and the electron system are given by complex-valued vectors. Part of the mystery of quantum mechanical systems is that in general there is no certain answer to experimental questions like "Does the electron have spin value $+\frac{1}{2}$ along the chosen axis, given that the state is $\psi$?" According to the Born rule, that answer is given by a probability value which is obtained from the projection operator on the eigenspace defined by the eigenvalue $+\frac{1}{2}$. So if the state is $\psi$, then the probability of an observable $A$ taking its $i$th eigenvalue $a_i$ is given by the following formula:

$$\Pr(A = a_i \mid \psi) = \langle \psi, P_{a_i}\psi \rangle = \lVert P_{a_i}\psi \rVert^2$$
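As a concrete illustration (my own sketch, not part of the EPR argument), here is a minimal numerical check of this formula for a single spin-1/2 particle, using the $z$-axis spin operator and an arbitrarily chosen state:

```python
import numpy as np

# The z-axis spin observable for a spin-1/2 particle: S_z = (1/2) * sigma_z,
# in units of hbar.
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

# np.linalg.eigh returns the (real) eigenvalues of a Hermitian operator in
# ascending order, along with orthonormal eigenvectors.
eigvals, eigvecs = np.linalg.eigh(Sz)

def projector(i):
    # Projection operator onto the eigenspace of the i-th eigenvalue:
    # P = |v_i><v_i| for a one-dimensional eigenspace.
    v = eigvecs[:, [i]]
    return v @ v.conj().T

# An arbitrary normalized state psi, chosen purely for illustration.
psi = np.array([1, 1j], dtype=complex) / np.sqrt(2)

# Born rule: Pr(S_z = a_i | psi) = <psi, P_{a_i} psi> = ||P_{a_i} psi||^2.
for i, a in enumerate(eigvals):
    prob = float(np.real(psi.conj() @ projector(i) @ psi))
    print(f"Pr(S_z = {a:+.1f}) = {prob:.3f}")  # 0.5 each for this psi
```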


Levels of Measurement and Cardinal Utility


A few weeks ago, I was having a chat with Todd and some others in the office, and it came up in the conversational mix that cardinal utility had the property of preserving "intervals." It was also mentioned that such utility representations were closed under "linear transformations." I was confused by the discussion and at first I didn't know why. On my walk home that day, I remembered I had heard those sorts of claims before. I typically think of a linear transformation as any mapping $f : V \to W$ from a vector space $V$ to a vector space $W$, both over a field $F$, with the following properties of linearity:

  1. if $c$ is in the field $F$, then for $v \in V$, $f(cv) = c\,f(v)$;
  2. if $u, v \in V$, then $f(u + v) = f(u) + f(v)$.


For example, the equation $f(x) = 2x$ is a linear transformation from the vector space of the set of reals back into itself. So, $f : \mathbb{R} \to \mathbb{R}$. When we speak of the algebra on $\mathbb{R}$ in one dimension, $\mathbb{R}$ is the underlying set for the vector space as well as the field.

Note that $f$ has the first property; suppose, for example, that $c = 3$ and $v = 2$. Then

$$f(3 \cdot 2) = f(6) = 12 = 3 \cdot 4 = 3 \cdot f(2).$$

It also has the second property; for example, let $u = 1$ and $v = 2$; then

$$f(1 + 2) = f(3) = 6 = 2 + 4 = f(1) + f(2).$$

But clearly, this linear transformation does not preserve intervals:

$$f(3) - f(1) = 6 - 2 = 4 \neq 2 = 3 - 1.$$

I didn't think I could be wrong about my understanding of the conventional use of the term "linear." I thought that maybe what people meant in this context, instead of "linear," was that cardinal utility is closed under the class of affine transformations. That is, the class containing all transformations of the following form (where $a$ is a scalar value in the field of $V$ and $b \in V$):

$$\phi(x) = a\,x + b.$$
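As a quick numerical sanity check (my own illustration, with arbitrarily chosen points): neither the linear $f(x) = 2x$ above nor an affine $\phi(x) = 2x + 3$ preserves interval magnitudes, but both preserve ratios of intervals, which is the invariant usually intended when cardinal (interval-scale) utility is discussed:

```python
# f is the linear map from the running example; phi is an affine variant.
f = lambda x: 2 * x          # linear: f(cx) = c f(x), f(x + y) = f(x) + f(y)
phi = lambda x: 2 * x + 3    # affine: a positive scaling plus a shift

a, b, c, d = 1.0, 3.0, 4.0, 9.0   # arbitrary points on the real line

# Interval magnitudes are not preserved: f(3) - f(1) = 4, but 3 - 1 = 2.
print(f(b) - f(a), "vs", b - a)

# Ratios of intervals ARE preserved by both maps: all three lines print 0.4.
print((b - a) / (d - c))
print((f(b) - f(a)) / (f(d) - f(c)))
print((phi(b) - phi(a)) / (phi(d) - phi(c)))
```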

Strong Homomorphisms and Embeddings


Homomorphisms are usually defined as structure-preserving mappings from one model to another. Representation theorems are taken to establish the existence of a homomorphism between a qualitative first-order structure endowed with some empirical relations and some sort of numerical first-order structure. The classic example is in the measurement of hedonic utility using introspection. In that case, we describe the axiomatic conditions ensuring the existence of a mapping from a structure $\langle A, \succsim \rangle$ to the structure $\langle \mathbb{R}, \geq \rangle$ of the reals with their standard ordering. Here $A$ is meant to be a (usually finite) set of alternatives or choices, and the relation $\succsim$ is meant to encode the introspectively accessible relation that something feels better than something else.
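In the now-standard form, such a representation theorem asserts the existence of a function $u$ tracking the qualitative relation numerically:

$$u : A \to \mathbb{R} \quad \text{such that for all } a, b \in A:\ a \succsim b \iff u(a) \geq u(b).$$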

I had been thinking that this notion of a homomorphism was exactly the same as in model theory, but it turns out there are some subtleties. In model theory, we usually say that if we have two structures $\mathfrak{A}$ and $\mathfrak{B}$ in the same signature $\sigma$ (which is a set of constant symbols, relation symbols, and function symbols), then a homomorphism from $A$, the domain of $\mathfrak{A}$, to $B$, the domain of $\mathfrak{B}$, is a function $h : A \to B$ satisfying the following conditions:

  • (i) For any constant symbol $c$ in $\sigma$, $h(c^{\mathfrak{A}})$ is $c^{\mathfrak{B}}$.
  • (ii) For any $n$-ary function symbol $f$ in $\sigma$ and $a_1, \ldots, a_n \in A$, $h(f^{\mathfrak{A}}(a_1, \ldots, a_n)) = f^{\mathfrak{B}}(h(a_1), \ldots, h(a_n))$.
  • (iii:a) For any $n$-ary relation symbol $R$ in $\sigma$ and $a_1, \ldots, a_n \in A$, if $(a_1, \ldots, a_n) \in R^{\mathfrak{A}}$, then $(h(a_1), \ldots, h(a_n)) \in R^{\mathfrak{B}}$.


The superscripts here indicate how the symbols are interpreted in the respective structures with objects or $n$-tuples of objects. Taken together, these conditions are less demanding than what is usually meant, it appears, in the theory of measurement. There, we replace the third condition with

  • (iii:b) For any $n$-ary relation symbol $R$ in $\sigma$ and $a_1, \ldots, a_n \in A$, $(a_1, \ldots, a_n) \in R^{\mathfrak{A}}$ if and only if $(h(a_1), \ldots, h(a_n)) \in R^{\mathfrak{B}}$.



In model theory, a mapping satisfying (i), (ii), and (iii:b) is called a strong homomorphism. The condition ensures that if two objects, for example, are not $R$-related in $\mathfrak{A}$, then their images will remain unrelated by $R$ in $\mathfrak{B}$.

If we add the condition that $h$, a strong homomorphism, is an injection, then the map is usually called an embedding. Likewise, if a strong homomorphism is a bijection, then it is an isomorphism.
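To make the distinction concrete, here is a minimal sketch for finite structures in a purely relational signature with one binary relation symbol $R$ (so conditions (i) and (ii) are vacuous); the toy structures, the map, and all names below are my own illustration:

```python
from itertools import product

def is_homomorphism(h, RA, RB):
    # Condition (iii:a): if R^A(a1, a2), then R^B(h(a1), h(a2)).
    return all((h[a1], h[a2]) in RB for (a1, a2) in RA)

def is_strong_homomorphism(h, A, RA, RB):
    # Condition (iii:b): R^A(a1, a2) if and only if R^B(h(a1), h(a2)).
    return all(((a1, a2) in RA) == ((h[a1], h[a2]) in RB)
               for a1, a2 in product(A, repeat=2))

def is_embedding(h, A, RA, RB):
    # A strong homomorphism that is also an injection.
    injective = len(set(h.values())) == len(h)
    return injective and is_strong_homomorphism(h, A, RA, RB)

# A two-element "feels at least as good as" structure mapped into the
# reals with their standard ordering >=.
A = {"x", "y"}
RA = {("y", "x"), ("x", "x"), ("y", "y")}   # y feels at least as good as x
B = {0.0, 1.0}
RB = {(u, v) for u, v in product(B, repeat=2) if u >= v}
h = {"x": 0.0, "y": 1.0}

print(is_homomorphism(h, RA, RB))           # True
print(is_strong_homomorphism(h, A, RA, RB)) # True
print(is_embedding(h, A, RA, RB))           # True
```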

The von Neumann-Morgenstern Elephant


I was telling Richard, Ko-Hung, and Joe yesterday about a paper I remembered by Oskar Morgenstern entitled The Collaboration Between Oskar Morgenstern and John von Neumann on the Theory of Games. It's a nice piece on the writing of one of economics' most sacred texts, the Theory of Games and Economic Behavior, first published in 1944. The book, spanning nearly 700 pages, was composed over several years, often at von Neumann's house in Princeton. Morgenstern writes that the meetings were so frequent that von Neumann's wife, Klari, was distressed by the "perpetual collaboration." He says that to mollify Klari, an avid collector of elephant trinkets, they promised to include a diagram with an elephant in it for her:

There were endless meetings either at my apartment over the bank or at 26 Westcott Road, where Johnny lived with his wife Klari and his daughter Marina (now Mrs. Marina von Neumann Whitman). We wrote virtually everything together and in the manuscript there are sometimes long passages written by one or the other and also passages in which the handwriting changes two or three times on the same page. We spent most afternoons together, consuming quantities of coffee, and Klari was often rather distressed by our perpetual collaboration and incessant conversations. She was at that time collecting elephants made of ivory, glass, and all sorts of other material. At one point she teased us by saying that she would have nothing more to do with the ominous book, which grew larger and larger and consumed more and more of our time if it didn’t also have an elephant in it. So we promised we would happily put an elephant in the book: anyone who opens the pages can find a diagram showing an elephant if he knows that he should look for one.


I took some time to look for this Easter egg and it is on page 64 (of the third edition) in Figure 4:

Figure 4 (Theory of Games and Economic Behavior, third edition, p. 64), showing the hidden elephant.