
July 18, 2008

Reconceptualizing underrepresentation

I've spent the past week reading Iris Marion Young's Justice and the politics of difference, and the last day or two thinking about this thread on the underrepresentation of women in STEM (Science, Technology, Engineering, and Mathematics) disciplines. (I should say `some STEM fields' or just give the list -- physics, engineering, computer science, pure mathematics, economics, and philosophy -- but I'm just going to use the easier-to-type version and let you know right here that I'm not talking about biology and chemistry, at least with respect to women.) Putting these two lines of thought together, I feel the need to reconsider the basics of underrepresentation: the problem is not what we think it is.

We usually think of underrepresentation as a problem of numbers: STEM fields are `disproportionately' male, or white, or straight, or able-bodied, etc. As evidence, we cite facts about the percentage of tenured faculty who are female or people of colour or queer or disabled, etc. (To cut down on the number of lists like these, I'm going to primarily use the generic `group X'. Substitute in your favourite oppressed social group for the X.)

But such facts are insufficient to establish a claim that group X is underrepresented in STEM. In order to establish this claim, we need the normative claim that the percentage of tenured faculty who are members of group X in a STEM field should be approximately such-and-such, or in the range from here to there. And that `should' causes problems, because (at least in part) no-one seems to have given much systematic thought to how we're going to determine what comes after it, or even whether it's the right sort of claim. Nor is it at all clear what's backing up that normative claim: why should the percentage of group X-tenured faculty in a STEM field be between a and b?

So, instead, I'd like to suggest another (or a novel) way of understanding underrepresentation. On this approach, underrepresentation is a matter of epistemic injustice (I'm drawing on Miranda Fricker here, too). More formally, underrepresentation is about the way certain groups are deprived of access to certain communities of epistemic and political power and prestige. (Namely, STEM communities.) As I see it, there are two aspects to this deprivation:
  1. whether members of group X have the same access to membership in the prestigious community as members of other groups, and
  2. whether the members of group X are oppressed within the prestigious community.

To put these in terms of the underrepresentation of women within STEM, the first aspect concerns whether women can get into the laboratory -- as a group, do they have opportunities to become scientists, engineers, and mathematicians? The second concerns their status within the laboratory -- does the STEM community treat women justly?

With just this much, we can make some preliminary observations. First, the feminist underrepresentation claim is that women are oppressed within certain STEM disciplines. The corresponding opposition claim is that women are not oppressed within STEM. Second, numbers tell us something, but not everything. This parallels injustice in the broader society: looking at the race and gender of poverty can tell us something, but it doesn't give us a complete understanding of economic, racial, or gender injustice. Third, examining the satisfaction of (Humean) preferences (as John Tierney did in the NYT article that inspired the FPh post) is only relevant to the extent that the satisfaction of (Humean) preferences is significant in the theory of justice we are using to analyse underrepresentation as epistemic injustice. Unless you're a utilitarian, this will be at best only somewhat significant -- again, it won't give us a complete picture.

I mentioned Young's book up in the first paragraph. Young has two major goals in this book: to challenge what she calls the `distributive paradigm' in mainstream political philosophy and theory, and to articulate in its place an alternative paradigm of injustice as oppression. The last paragraph parallels the first task: my observations indicate that a statistical examination of the distribution of resources and positions of power gives an incomplete picture of underrepresentation. This distribution is certainly relevant to issues of epistemic justice, but not the whole story.

Young lays out her alternative approach to justice as a taxonomy of forms of oppression. Borrowing this taxonomy as our background theory of justice, we can re-articulate the feminist claim of underrepresentation as epistemic injustice: women are exploited, marginalized, powerless, and sometimes subject to cultural imperialism and violence within STEM.

1. Women are exploited as research assistants, technicians, instructors, test subjects, secretaries, janitors, and other assistants and support staff to primary investigators and tenured faculty. Their work, both creative and menial, is appropriated by and benefits PIs and tenured faculty. I say `women are exploited as research assistants', for example, and not `research assistants are exploited' because these lower-status positions are disproportionately held by women, or because the women who hold them are less likely than their male colleagues to receive eventual returns on their sacrifices (the leaky pipeline effect).

2. Women are marginalized and powerless in similar ways. (Young is not, to my mind, entirely clear on the difference between these two.) Assistants and support staff to PIs and tenured faculty have little or no power to make decisions about the research they will participate in or how they will participate in it. They might be able to choose whether or not to participate at all, or suggest new directions, experiments, research strategies, etc., but have no real power to shape the course of research. Similarly, instructors and teaching assistants have little or no discretion over their courses -- they are assigned by their superiors to teach this class or that, on a term-by-term basis, and usually based on the department's need to cover teaching duties perceived as menial or boring by tenured faculty (Physics 101, the early calculus sequence, remedial classes, etc.). The content of these courses is usually dictated by official standards and texts, or by all the sections of a course being linked to a standardized midterm and final.

3. Cultural imperialism and violence, Young's last two categories of oppression, are less common than the first two (treating marginalization and powerlessness as one category for the moment), but they are still issues of injustice within STEM that need attention. Cultural imperialism refers to the widespread acceptance of stereotypes and biased perceptions of marginalized groups. In the context of STEM, this would mean the acceptance of scientific theories with, say, sexist content and implications. Uncontroversial historical examples abound -- Stephen Jay Gould's The mismeasure of man has some jaw-dropping ones. More controversial are contemporary theories of, for example, gender- and race-linked differences in the distribution of IQ and problem-solving abilities, and still-prevalent `active male/passive female' models of fertilization. (In this post, I'm going to remain neutral on the question of whether any of these theories was epistemically acceptable in its heyday, or is epistemically acceptable today. Perhaps there are genuine dilemmas of epistemic justice.)

4. Violence is exactly what you would expect. I suspect -- though I could be wrong -- that violence is a pervasive or systematic problem only in medical and pharmacological treatment and research, and not other STEM fields. (Although perhaps medicine and pharmacology should not be classified as STEM fields at all.) Violence in medicine is closely linked with race and class, and with powerlessness: women of colour and women living in poverty have been subject to forced sterilization as recently as the 1970s, they are less likely to receive proper medical care and due respect for their autonomy as patients, and so on.

Similar observations apply to racial groups, and of course disability. (I suspect there's a whole thesis to be written on the ways disabled people are the victims of violence within medicine.)

I have one final observation. One especially persistent feminist criticism of STEM fields with underrepresentation problems (I've heard this about philosophy, physics, mathematics, computer science, and economics) is the prevalence of a macho, aggressive, or `duelist' culture (the phrase is Janice Moulton's) that is supposed to drive many women away from these fields. In such a culture, one is supposed to be a vigorous and aggressive defender of one's views in argumentative contexts; the thought is that this creates a great deal of competition, effectively weeding out the weakest (and, presumably, therefore untrue) ideas. Traditional feminine attributes of pleasantness and self-abasement create a catch-22: either women cannot adopt these aggressive norms (they conflict too much with the way they have been taught to behave), or, when they do, they are punished and disparaged by their male colleagues for being too aggressive.

This is, I think, an extremely important criticism. But I'm not sure where to place it in Young's taxonomy of oppression. Perhaps one side of the dilemma is marginalization -- women who do not adopt the aggressive stance are denied standing within the community -- and the other is cultural imperialism -- the successful imposition of masculine norms of behaviour on women.

February 29, 2008

K(p) & -K(K(p))

A fascinating short piece in Salon about some neuroscience work on knowing -- from the `aha!' moment when we recognise that an explanation fits the data to the certainty attached to deeply cherished beliefs. What's tricky here is that this self-conscious experience of knowing isn't, the author thinks, a rational or truly conscious process.

In his bestselling "Blink," New Yorker staff writer Malcolm Gladwell describes gut feelings as "perfectly rational," as "thinking that moves a little faster and operates a little more mysteriously" than conscious thought. But he's flying in the face of present-day understanding of brain behavior. Gut feelings and intuitions, the Eureka moment and our sense of conviction, represent the conscious experiences of unconsciously derived feelings.

Look at the feeling of knowing in the light of evolution. It explains how we learn. Compare it with the body's various sensory systems. It is through sight and sound that we are in contact with the world around us. Similarly, we have extensive sensory functions for assessing our interior milieu. When our body needs food, we feel hunger. When we are dehydrated and require water, we feel thirsty. If we have sensory systems to connect us with the outside world, and sensory systems to notify us of our internal bodily needs, it seems reasonable that we would also have a sensory system to tell us what our minds are doing.

To be an effective, powerful reward, the feeling of conviction must feel like a conscious and deliberate conclusion. As a result, the brain has developed a constellation of mental sensations that feel like thoughts but aren't. These involuntary and uncontrollable feelings are the mind's sensations; as sensations they are subject to a wide variety of perceptual illusions common to all sensory systems. Understanding this couldn't be more important to our sense of ourselves and the world around us.

February 24, 2008

Why I'm going to be a philosopher of science

This semester, I entered the orals stage of my Ph.D. programme. Sometime in the next 7-14 months, I will spend 90 minutes locked in a room with five faculty members, who will happily interrogate my understanding of a list of classical and recent works of philosophy. Should I pass, I move on, more or less immediately, to the final (and interminable) thesis-writing stage. Should I fail, I can try again before the end of my fourth year (next May); if I fail at that point, or don't manage to get a second try scheduled before the end of the year, I get kicked out of school.

Despite the immense pressure, what's nice is that I get to choose what's on the list. At least, to an extent. I pick the general area, and about 2/3 of the list is standard for that general area; I get to decide what goes on the remaining 1/3. So, roughly speaking, everything on the list is something I'm interested in. I don't have to read any metaphysics or Plato or Scotus if I don't want to.

But I've had some problems picking the general area. I enjoy both philosophy of math and philosophy of science immensely, and while I don't think that they should be seen as two distinct areas of philosophy, trying to fit them both into a single orals list has proven impossible. On Thursday I met with one of my professors to get some help with figuring out the list. She gave me an ultimatum: Within two weeks, I have to decide whether I will be a philosopher of science who knows a bit about philosophy of math, or a philosopher of math who knows a bit about philosophy of science. She recommended I think about potential thesis projects and look through the major philosophy of math and philosophy of science journals to decide.

While I had some thesis projects in mind, they were hopelessly vague, and this is part of the problem I was having putting together a list. (One thing that's especially great about this particular professor: she never, ever lets me get away with any sort of hand-waving.) So I went and looked through journals and spent an hour writing down rough versions of thesis projects. Most of the projects I came up with were either impossibly difficult or, upon closer inspection, would be incredibly boring to actually work on in detail. The philosophy of math projects that looked both feasible and interesting came in two distinct flavours:

  1. The relationship between mathematics and society, especially the relationship between ethico-political values and `good mathematics'.
  2. The epistemological significance of certain features of the community of mathematicians, such as the underrepresentation of women and racial and ethnic minorities.


Philosophers of science are interested in these two things. But -- critically, and counterintuitively -- philosophers of math generally aren't. Philosophers of math, almost uniformly in this country, work on formal logic, the metaphysics of math (do numbers exist?), and foundations. Furthermore, what I discovered in the journals was that, to the extent that anyone does work on such things, the work appears in philosophy of science and history of mathematics journals, not in philosophy of math journals. So, to work on one of these projects, I would have to read a lot of philosophy of science literature, and most of the philosophy of math literature would be completely irrelevant.

So I've made my decision. Strictly speaking, I will be a philosopher of science. Like most philosophers of science, I'll have a certain amount of expertise in, and affection for, one science in particular -- mathematics -- and this will be reflected in what I write and teach. But I won't, strictly speaking, be a philosopher of math. My interests and research projects simply aren't the sort of things philosophers of math work on, at least in the English-speaking part of the world.

December 11, 2007

Values on both sides

Say you have a certain statistical tool S. You give a population a certain test, you crunch the resulting data, and out comes a number, one for each person. For a certain reference population R, the numbers line up in a normal distribution -- a lot of individuals clustered near the mean, smoothly tapering off towards the extremes.

Then you go and apply S to another population, P. Now things are way off -- the population's clustered around a much lower mean, maybe the distribution looks weird, and so on.

Suppose further that P is a significant part of the total population you're interested in measuring using S. Say, around 12%. And suppose, furthermore, that the mean of P is in the lower third standard deviation of the distribution for R.

Then you have to make a choice between two basic hypotheses: Either individuals in P are significantly lower than individuals in R in terms of whatever it is S is supposed to measure. Or S is a crappy way of measuring this whatever it is -- it's gotten the distribution spectacularly wrong.

How to decide between these two? Well, if you already think that individuals in P are significantly lower than individuals in R in terms of whatever it is S is supposed to measure, then you embrace that hypothesis, and don't really consider whether or not S is working right, because it does indeed appear to be working right.

But then, of course, you can't point to S to argue for the claim that individuals in P are significantly lower than individuals in R in terms of whatever it is S is supposed to measure. That would be blatantly circular.

Back in October, James Watson -- misogynist and Nobel laureate co-discoverer of the structure of DNA with Crick and Franklin -- said something rather racist:

[Watson] is "inherently gloomy about the prospect of Africa" because "all our social policies are based on the fact that their intelligence is the same as ours - whereas all the testing says not really", and I know that this "hot potato" is going to be difficult to address.

Pretty indefensible, right? How can anyone think that Africans are less intelligent than `us'? (I'm not even going to get into the assumption that the intersection of `us' and Africans is the empty set.)

Well, thank the internets, someone has risen to Watson's defence.

Watson's claim in his recent interview with Charlotte Hunt-Grubbe that intelligence testing shows lower scores in Africa than Europe is likewise, entirely supported by the scientific literature.

Then there are citations to a bunch of studies that all say about the same thing: on standard IQ tests, the mean throughout Africa is between about 65 and 75. That's apparently right around the current definition of `mental retardation'.

The situation is an instance of the one I described above. Test in Western nations, and you get a mean around 100 (by construction). Test in Africa, and you get a mean in the third standard deviation. Africa's population is about 888 million, which is around 12% of the world's 6 billion. These results are incompatible with a worldwide normal distribution of IQ -- if the standard deviation for the African data sets is the same as the standard deviation for the Western data sets, at least twice as many people are in the third standard deviation as should be.
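The incompatibility can be checked with a back-of-the-envelope calculation. Here is a minimal sketch in Python, assuming (these are my assumptions, not figures from the studies themselves) a standard deviation of 15 for both data sets, an African mean of 70, and the 12% population share used above:

```python
from math import erf, sqrt

def normal_cdf(x, mean=100.0, sd=15.0):
    """Fraction of a normal distribution falling below x."""
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

# Hypothesis A: a single worldwide normal, N(100, 15).
# Fraction scoring below 70, i.e. more than two sd below the mean.
single = normal_cdf(70)

# Hypothesis B: a mixture -- 88% of the world with mean 100,
# 12% (the African share used above) with mean 70, both sd 15.
mixture = 0.88 * normal_cdf(70, mean=100) + 0.12 * normal_cdf(70, mean=70)

print(f"single normal: {single:.3f}  mixture: {mixture:.3f}")
```

The single normal puts about 2.3% of people below 70; the mixture puts about 8% there -- several times too many for both data sets to be draws from one worldwide curve.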

So we have to choose between two hypotheses: either Africans are less intelligent than Westerners (that's highly simplified, but again, bracketing the spectacularly racist assumptions here) or IQ tests are not a good way of measuring intelligence.

There is a long attempt in that post to argue for the negation of the second hypothesis -- that is, that IQ tests are, in fact, a good way of measuring intelligence. The argument comes down to two points: First, IQ test scores correlate with other IQ test scores, and second, IQ test scores correlate with economic success. The first is useless -- that these tests all correlate with one another shows only that they measure the same thing, and that thing could just as well be an artificial statistical construction as intelligence. And the second only implies that IQ tests are a good way of measuring intelligence if economic success is a good way of measuring intelligence. And that's only plausible if you think that rich people are more intelligent than poor people. The possibility that economic status has an effect on IQ scores is never considered.

It all comes down to racism, classism, and the intersection of the two. If you already believed that Africans and poor people are less intelligent than `us', then you won't see any problems here. IQ tests must measure intelligence, because they get the results we expected for intelligence. If you think that the idea that Africans and poor people are less intelligent than `us' is offensive on its face, then you're as liable as before to think that this IQ test business is a load of crap.

Which makes this an excellent example of the way ethico-political values show up as background assumptions when reasoning from evidence to theory. We have the same sets of evidence, but different background assumptions concerning race, class, and intelligence. Based on these assumptions, we reason to very different -- indeed, incompatible -- theories. Without these background assumptions -- if our reasoning process were truly `value-free' -- we would have no way of reaching any conclusions beyond statistical correlations between different metrics.

However, where I am willing to embrace the role my values play in my reasoning, the defender of Watson, it seems, does not.

September 04, 2007

Polanyi's argument for the autonomy of science

Blogging took an unexpected break this weekend, as I was dealing with a personal crisis which, while not entirely dealt with (and will not be until, most likely, the end of the semester), is at least now manageable.

So let's do some straight-up Analytic philosophy. Only of science, because I still have no patience for metaphysics.

In his (apparently) famous piece `The republic of science' (1962), Michael Polanyi gives an argument for the autonomy of science. In particular, he's arguing that scientists ought to be allowed and encouraged, by society at large, to pursue whatever research projects they like, rather than the research projects thought likely to be useful to society at large. This is vague, and that reflects the fact that the conclusion of the argument is vague.

The particular argument I'm interested in comes from page 61, for those following along at home. The terminology I use below is my gloss on Polanyi. (In other words, this is definitely a paraphrase.)

(1) Academic credit is an accurate indication of scientifically promising research programmes.
(2) Society ought to distribute grant money (and other material resources necessary for research) according to the scientific promise of research programmes.
(3) Therefore, society ought to distribute grant money according to academic credit.


This conclusion is to be contrasted with something like

(4) Society ought to distribute grant money according to projected social utility.

Polanyi's conclusion holds science to be autonomous (at least normatively) in the sense that the property determining the distribution of grant money is internal to the institutions of science (albeit not the sort of evidence-and-theory model of science of the positivists). It is not, in particular, the property of being expected to produce something of great value to society as a whole (eg, new technologies that will allow us to grow ten times as much corn).

One quick paragraph on `academic credit'. This term just refers to the level of esteem a research programme (or researcher) enjoys among the community of scientists. If everyone thinks nonlinear dynamics is going to be the Next Big Thing, then nonlinear dynamics enjoys a large amount of academic credit. On the other hand, if everyone thinks string theory is a complete dead end, then string theory enjoys very little academic credit. Polanyi is arguing that, in this situation, nonlinear dynamics should get lots of grant money (and grad students and lab space), while string theory should be basically cut off.

Another quick paragraph on `scientific promise'. This is supposed to be the promise (or, better, the potential or likelihood) of producing some new and valuable piece of pure knowledge. That `pure' is extremely important. Polanyi is operating with an implicit but very sharp division between science and technology (episteme and techne). Technology is a valuable product of scientific research, but is only an accidental product. The true product of scientific research is pure knowledge, and it is in this way that scientific research is valued `in itself'.

Now, I have a number of problems with this argument. First, and most trivially, we're not really given any reason to think (1) is true. But it's plausible, and there doesn't seem to be any better way of measuring scientific promise on offer.

The blue whale that is the second problem is that (2) is begging the question, or coming so close to begging the question as makes no difference. With Polanyi's implicit distinction between science and technology, we can rewrite (4) as

(4') Society ought to distribute grant money according to technological promise.

If (4') is the anti-conclusion Polanyi is trying to reject, then (2) can't serve as a premiss. It's the conclusion he wants.

That is, if Polanyi's opponent is valuing science for its technological achievements, and not for its ability to produce pure knowledge, then of course the opponent is going to reject the premiss that grant money ought to be distributed according to the potential to produce pure knowledge. That, fundamentally, is just exactly where the disagreement lies.

Perhaps Polanyi intends (2) to be the real conclusion of the argument, and I'm just spectacularly misunderstanding him. But I don't see anything else that might serve as a premiss to get from (1) to (3).

August 09, 2007

Chaos, the stock market, and the ends of political philosophy

I just finished teaching a class on chaos theory to a group of 15 young people (ages 12-16). The subject is fascinating, and I highly recommend picking up a copy of James Gleick's Chaos for a non-technical and historical introduction. One of the major lessons of the development of chaos theory over the past 40 years is that order and disorder are not, as traditionally thought, opposites. Gigantic, destructive hurricanes -- to pick a vivid example -- are caused by essentially the same processes that produce refreshing light rains.

One of the most important and interesting features of chaotic systems is a tendency for a system to transition between order and disorder spontaneously and without any outside influence. The same processes keep happening in the same way, and predictable stability suddenly turns into completely unpredictable randomness -- and then, just as suddenly, settles down into predictable stability again. Freakish weather -- Indian summers, snow in April and May (in the US) -- is a good example of this sort of thing.
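The logistic map gives a concrete, minimal illustration of this (a standard textbook example, not anything specific to weather): at a parameter value just below its period-3 window, the very same update rule, applied identically at every step, produces long nearly-periodic stretches punctuated by sudden chaotic bursts -- what dynamicists call intermittency. A sketch:

```python
def logistic_orbit(r, x0, n):
    """Iterate the logistic map x -> r * x * (1 - x) for n steps."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# r = 3.8284 sits just below the period-3 window: the orbit alternates,
# spontaneously and with no change to r, between laminar (near-period-3)
# phases and chaotic bursts.
orbit = logistic_orbit(3.8284, 0.5, 2000)
```

Plot the orbit and the alternation is obvious: flat, repetitive runs give way to bursts of randomness and back again, with no outside influence anywhere in the loop.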

I spent all day yesterday travelling, which means most of the news I saw was business news. And all the business news was going on and on about the turmoil in the stock market. And I was reminded that chaos theory has turned out to be rather successful at explaining the behaviour of stock markets. It's not that the abstract mathematics gives a causal account of how real economic events influence the way the wealthy trade money. Rather, those causal processes, whatever they are, have a mathematical structure that is accurately expressed using chaotic dynamical systems.

This means that stock markets, like other chaotic phenomena, will tend to alternate -- unpredictably and on all time-scales -- between periods of ordered and disordered behaviour. The current turmoil on Wall Street may be the result of exactly the same processes behaving in exactly the same way as two months ago. Predatory and self-destructive lending practices may have been an incidental factor -- more like the straw that broke the camel's back than a ton of bricks -- or may have even been completely irrelevant -- the system went from order to disorder entirely naturally, with no outside influences.

But we don't think in chaotic terms, at least not generally and not yet. When we see order turning into disorder, we think some external force has intervened in the system. And so we see pundits talking about this or that possible cause, without ever considering the possibility that there was no cause, that this is just the sort of thing that happens in chaotic systems.

Now, to get to something that's actually important. I don't want to try to formulate the definitions that would make empirical investigation actually possible here, but I do want to suggest that `political stability' may also be a chaotic dynamical system.

This hypothesis would have some interesting explanatory power. It would explain, for example, why the 1960s and '70s were so tumultuous, and our era is so placid.

But, for this post, I'm really interested in thinking about what implications this hypothesis would have for political philosophy. We tend to think of the just society as at least internally stable -- it has all its problems solved, so the only way instability could arise would be for some outside event to occur, like a drought or plague or alien invasion. Some political philosophers have even considered stability as the single most important issue for political philosophy -- Hobbes, for example, defends a totalitarian state on the grounds that it's the only way to guarantee stability, and Rawls makes the transition from A theory of justice to Political liberalism by realising that an ineliminable political pluralism will be a feature of any liberal democracy, and hence a potential threat to stability that must be dealt with.

But if this chaos hypothesis is right, then a perfectly stable society is a pipe dream. We may be able to achieve stability for a while, but instability will never be completely eliminated, and indeed will crop up unpredictably and spontaneously on all time scales. Even Fall of the Roman Empire-level disorder may be inevitable. And then political philosophers may be completely wasting their time, looking for the totally stable society.

We should therefore go after justice directly. A stable society is either impossible, or will be a secondary achievement of a truly just society. Perhaps we can even develop a conception of the just society that can survive a Fall of the Roman Empire-level disorder.

June 14, 2007

Mr. Wizard

I was never a huge Mr. Wizard fan, as I was more of a PBS kid after school, but it's hard to over estimate the impact he had on children's television, especially science shows like Beakman's World or Bill Nye The Science Guy. Actually, I'm not even aware what the current science-tv personalities are these days. I certainly hope there are some out there following in his footsteps.

Update: Changed the accidental "under estimate" to the intended "over estimate".