Our moral solar system
Today I spent some time digging into social psychologist Jon Haidt's work on moral foundations. It was time well spent. I've been thinking a lot lately, not writing much, but thinking a lot about harm. I've been thinking about different forms of harm; I've been thinking about our ability to perceive those various forms of harm; and I've been thinking about how our inability to perceive harm leads to even more harmful consequences.
And I've been thinking about the paradoxical challenge of explaining that mechanism to people who, apparently, do not perceive harm. I'm still working on that.
Meanwhile, Jon Haidt and his colleagues at the University of Virginia have been examining the foundations of morality that our species has evolved, along with the interactions of our evolved foundation with our cultural programming. Haidt's model, which I find useful, suggests five foundations of "intuitive ethics".
The first of the five is "Harm". That grabbed my attention. As I thought about the other four items in Haidt's list, I found myself thinking repeatedly,
"Isn't that also a form of harm?"
I noticed, too, that Haidt and his associates acknowledge that their model may be wrong, even though they find it useful. That grabbed my attention, too, of course. They've cheerfully offered a reward to folks who can suggest improvements. In the fine print of their challenge they compare the task to identifying planets in our solar system, and to the challenge of distinguishing planets from other large rocks, with a nod toward the former planet Pluto.
Thinking about the solar system, about what constitutes a planet, and about what constitutes harm, led me to a modification of Haidt's description that works better for me. I think of the concept of Harm as the crucial, unifying component of Haidt's model of moral foundations. Harm, to me, represents the sun, the center of a moral solar system. The other four planets, Fairness, Loyalty, Respect, and Sanctity, revolve around it. They all relate to Harm. They involve harm, in differing ways. Those other moral components are captured and orbit in the same system because Harm exerts far greater gravity upon them.
One aspect of harm, and the perception of harm, that I've been thinking about involves anticipation of future harm. Beyond recognizing a clear and present danger, our ability to predict harmful consequences — or not — seems to me one of our greatest challenges as a society.
That predictive anticipation is one way that those other four components relate to harm as a central, unifying framework. Consider Fairness, for example. I hope it's obvious that it would be unfair to withhold food from me if I'm starving. But what about something smaller? What about when someone cuts in front of me in line? Or, more often, while driving? On those occasions the annoyance of being treated unfairly usually passes quickly. The greater question for me, however, is how else a socially unfair driver or line-jumper might behave in the future. If I can't trust someone to make a wise, fair decision about a small action, how can I trust such a person to make a wise, fair decision about something bigger, something more threatening, something more harmful? It's the warning alarm about future harm that unfairness raises in my mind.
Haidt's group looks at loyalty in the context of groups. I'm not very good at group loyalty for its own sake. I'm a lousy sports fan. As a teenager, I never felt the appeal of "school spirit". Such arbitrary assignments of loyalty rarely include any sense of perceptible harm that inspires me. The well-being of the community where I live, however, affects my own well-being. My perception of current or future harm can, and does, inspire loyalty to my community.
Respect, for me, is another valuable predictor of future behavior. There may be immediate psychological benefit, or harm, if someone does or does not respect me. Like fairness, however, my greater concern is for the future. People who respect me now seem more likely to ensure my well-being in the future. People who disrespect me now (by treating me unfairly?) seem more likely to cause me harm in the future. Lack of respect is a warning alarm about the potential for future harm.
The last component, sanctity (paired with "purity" in Haidt's model), is "shaped by the psychology of disgust and contamination." Physical contamination, such as polluted air, polluted water, or spoiled food, clearly relates to potential physical harm.
So when I think about Haidt's model, I see the concept of harm as an underlying theme. I see the concept of harm as a unifying framework. In our moral solar system, I see our ability to perceive harm in all its forms as the sun whose gravity binds other moral components into a cohesive system.
Am I saying Haidt's model is wrong? Not necessarily. I find it more useful to think about harm as the key component because that suits the persuasive work that I do. I encourage folks to see a bigger picture. It's been my experience that finding one key item to focus upon can help other people see a bigger picture. Placing the perception of harm at the center of our moral solar system works better for me, for what I want to do. What Haidt's group wants to do is examine the interaction of our moral foundation with our culture and with our public policy decisions. And they need to measure those interactions somehow. For their purpose, collapsing the entire model to emphasize harm might make it more difficult for them to measure the interactions they seek to describe. My model of our moral solar system may not work as well for them as it does for me.
Most models are wrong, but some are useful.
Some models are wrong and useful in different ways, at different times, for different people, with different purposes.
3 comments:
John Stuart Mill's idea of a "harm principle" has been twisted around by some sick people. As a social libertarian who believes the government has no right to impose on my decisions which cause no harm, like medical, recreational, or matrimonial, the idea of "future harm" has always seemed scary to me. It has been used by fascists and theocracies to justify laws against individuals whose "immorality" or "amorality" could be construed as contamination of the moral soul.
Further, the Bush Doctrine is based on this assumption of "future harm." I am always wary of anyone who claims to know what's going to happen in the future.
However, from an environmental POV, it is quite easy to say, hey, all that pollution coming out of your smokestack is going to give some kid asthma or lung cancer, thereby causing us all physical and financial harm in the future. We know this because we have the science that proves it, unlike the theocrats who make unprovable moralistic claims about the contamination they perceive being spread by abortion, euthanasia, or gay marriage.
Great food for thought, ETBNC... as usual.
Some yardsticks do binarize a whole domain of discourse... "Is X a good thing or a bad thing?" [To which example we start attaching context like "good for me, that is", which collapses the simplicity of the discussion into a chaos of relative merits of X.]
My take is that people are quite selective in seeing more potential harm from some quarters and less from others... and events finally prove them wrong. Witness our departing administration. The puzzle is why, when the thing that does someone in is not what they were guarding against, do some people refuse to change their position, or even their perception of what the highest-priority threats are? I reckon you are well on the trail when you associate models with the problem of harm. I am not sure what the reward is that makes some provably broken models too dear to part with, but that reward is hard at work when you hear a guy like Cheney in last week's interview talking as if the most important thing to worry about is still where the next Muslim terrorist is going to strike. It is vastly more charitable to that creep for me to try to understand his "leadership" in terms of ego investment in a dumb model than to simply call him evil. It doesn't matter what we resort to for explanations of people who can pose as much harm to us as Cheney has, unless it yields an operative way to reduce the harm. I find it a horrible tangle of cross purposes when I try to think of our sea of troubles as misperceived threats and miscalculated responses to threats. [threat = "future harm", right?]
Would you say that people who operate on hope of a reward or a profit of some kind for their efforts are fundamentally different from people whose efforts are focused on avoiding harm?
Interesting comments, Scott and GreenSmile. Thanks for your feedback.
I'll try to follow up when I have something coherent to add.
Cheers