An interesting fact within, regarding economic production per ton of greenhouse-gas emissions.
Getting Warmer
By the Editors
There’s an international conference on global warming — the 14th Conference of the Parties to the U.N. Framework Convention on Climate Change — under way in frozen Poznan, Poland. You’ll be excused for not having heard about it, because not much is happening, despite Al Gore’s triumphal entrance into the city, which may as well have occurred in a chariot. (“Many see him as a saviour,” reports Der Spiegel.) In Poznan, what is not happening is more significant than what is.
The Poznan meeting was supposed to prepare the way for a “Son of Kyoto” pact to be signed, sealed, and delivered at Copenhagen in December next year. Now even the chief of the U.N. Framework Convention on Climate Change, Yvo de Boer, has admitted that “under the circumstances, nobody expects a fully elaborated long-term response” in Copenhagen. The circumstances he refers to are the economic crises at present worrying the world.
In Europe, the financial turmoil has broken the stride of the EU’s lockstep approach to climate issues. Those with greater economic vulnerability — Italy, Poland, and much of Eastern Europe — refuse to accept a new climate deal, crafted by French president Nicolas Sarkozy, on the grounds that it will further damage their already fragile economies. German Chancellor Angela Merkel seeks exemptions for her country’s heavy industries. Italian environment minister Stefania Prestigiacomo pooh-poohs the idea that “green jobs” will transform advanced economies, scoffing, “Some people claim environmental measures are a way to re-launch industry. But let’s be realistic: Resources are limited, and they will be even more so because of the economic crisis.”
Meanwhile, developing countries remain adamant that they will not accept any new limits on their emissions in Kyoto II. This is a fact of no little salience, given that China is today the world’s No. 1 emitter of greenhouse gases, followed by the United States, Indonesia, and India. (It’s worth keeping in mind what the U.S. gives the world along with those emissions: In the more relevant comparison, the ratio of economic production to greenhouse emissions, the United States is the best performer among this group by a very wide margin, producing $2,000 in economic value per ton of greenhouse emissions to China’s $450, India’s $497, and Indonesia’s $679. A ton of emissions from the United States brings the world about 4.5 times as much economic good as a ton of emissions from China.) The developing nations are right to resist new limits because the affordable energy that fossil fuels supply is an important engine for lifting their people out of poverty. But without meaningful limits on developing-world emissions, greenhouse gas concentrations will continue to rise. That makes things awkward for Obama.
For eight years, the United States has been the object of criticism, much of it harsh and unfair, for its unwillingness to be afflicted with sweeping emissions limits and the punitive economic consequences that will go along with them. And now, the very same international parties that censured the United States for looking to its own interests have themselves become the agents of delay. This presents a quandary for the president-elect, who sat next to Gore and declared that “the time for delay is over,” and who famously declared that his ascension would constitute “the moment when the rise of the oceans began to slow and our planet began to heal.” Obama promised to submit to the global consensus on climate change, but that consensus no longer exists as an operational political fact.
This may yet work out well for Obama and his new environment team. One of the lessons of Kyoto was that imposing an international global-warming agreement upon independence-minded America was bound to fail. Obama now has an opportunity to devise a domestic policy — likely some variant of a “cap and trade” regime — that he can take to the meeting after Copenhagen, in 2010, in hopes of inducing the other parties to follow his lead.
But if he is unable to secure the passage of new climate legislation — or if he is foolish enough to let the EPA proceed with its quixotic dream of circumventing Congress to regulate emissions itself by reinterpreting the Clean Air Act — Obama may find that even a 2010 deadline will come too quickly. The one thing we can be sure of is that the Poznan meeting will result in plans to meet again and talk some more, and that Al Gore and his acolytes will hail this as a historic achievement.
“Sentence first — verdict afterwards,” said the Queen. “Stuff and nonsense!” said Alice loudly. “Off with her head!” the Queen shouted at the top of her voice. — Alice in Wonderland
They say Lewis Carroll was a serious dope fiend, his mind totally scrambled on opium, when he concocted “Alice in Wonderland.” A place where the sentence comes first and the verdict afterward? Where people who protest the madness are sentenced to death themselves?
Such a place rolled out the red carpet for Benicio del Toro this past weekend. I refer to Havana, Cuba, which put on the Havana Film Festival, where the 4½-hour movie “Che” was the main feature. In May del Toro won the Cannes Film Festival’s “best actor” award for his role as Che Guevara in the movie he co-produced and Steven Soderbergh directed.
While accepting the “best actor” award at Cannes, Benicio del Toro gushed: “I’d like to dedicate this to the man himself, Che Guevara!” as the crowd erupted in a thunderous ovation. “I wouldn’t be here without Che Guevara, and through all the awards the movie gets you’ll have to pay your respects to the man!”
In a flurry of subsequent interviews in Europe, del Toro equated Che Guevara with Jesus Christ and told a Spanish interviewer, “Ideologically I feel very close to Che.”
Alas, outside Havana and Cannes, the movie has met with mostly scathing reviews. Variety’s Todd McCarthy branded the movie “defiantly nondramatic” and “a commercial impossibility.” New York Magazine calls it “something of a fiasco.”
Soderbergh and Benicio del Toro actually had an intriguing and immensely amusing theme, if only they’d known how to plumb it. Soderbergh hails Guevara’s as “one of the most fascinating lives in the last century.”
Almost all who actually interacted with Ernesto Guevara (and are now free to express their views without fear of firing squads or torture chambers) know that The Big Question regarding Ernesto, the most genuinely fascinating aspect of his life, is: how did such a dreadful bore, incurable doofus, sadist, and epic idiot attain such iconic status?
The answer is that this psychotic and thoroughly unimposing vagrant named Ernesto Guevara had the magnificent fortune of linking up with modern history’s top press agent, Fidel Castro, who, for going on half a century now, has had the mainstream media anxiously scurrying to his every beck and call and eating out of his hand like trained pigeons.
Had Ernesto Guevara de la Serna y Lynch not linked up with Raul and Fidel Castro in Mexico City that fateful summer of 1955 — had he not linked up with a Cuban exile named Nico Lopez in Guatemala the year before, who later introduced him to Raul and Fidel Castro in Mexico City — everything points to Ernesto continuing his life as a traveling hobo, panhandling, mooching off women, staying in flophouses, and scribbling unreadable poetry.
Not to be outdone in the trained-pigeon department, while making their film Soderbergh and Del Toro repeatedly visited Havana to coo and peck away as anxiously as Herbert Matthews, Dan Rather, or Barbara Walters while the regime tossed out its propaganda crumbs. Del Toro and Soderbergh, on top of relying for the script on Che’s diaries (published in Havana by Cuba’s propaganda Ministry and edited by Fidel Castro, who wrote the introduction), also obtained recollections from Che’s widow and many of his former underling executioners. These all currently serve as ministers in a totalitarian regime. “We wanted to show the real character,” boasts Soderbergh. Absolutely no chance of any hanky-panky with the historical record from these sources!
“I met him [Fidel Castro] for about five minutes,” Del Toro said. “He knew about the project and he said to me that he was very happy (I’ll bet!) that we had spent so much time researching the subject.” And why shouldn’t Castro be ecstatic with the film? Most of del Toro and Soderbergh’s “research” time was spent with Cuba’s propaganda Ministry.
“I’m here in Cuba’s hills thirsting for blood,” Che wrote his abandoned wife in 1957. “Dear Papa, today I discovered I really like killing,” he wrote shortly afterwards. Alas, this killing very rarely involved combat; it came from the close-range murder of bound and blindfolded men and boys.
“When you saw the beaming look on Che’s face as the victims were tied to the stake and blasted apart,” said a former political prisoner to this writer, “you knew there was something seriously, seriously wrong with Che Guevara.”
In fact the one genuine accomplishment in Che Guevara’s life was the mass-murder of defenseless men and boys. Under his own gun dozens died. Under his orders thousands crumpled. At everything else Che Guevara failed abysmally, even comically. Yet Soderbergh and Del Toro skip over these fascinating quotes and Che’s one genuine accomplishment as a revolutionary.
Alas, taking on Fidel Castro as agent has its drawbacks, as former colleagues all attest: “Fidel only praises the dead.” So prior to whooping up his revolutionary sidekick, Fidel Castro sent him “to sleep with the fishes.”
“Most of the people I met that knew him,” says Del Toro, “when they spoke about him, there was a sense that they were talking about a family member that they cared about with infinite love.”
Indeed, Fidel Castro’s expressions of love for his former sidekick must have misted Del Toro’s eyes.
Too bad Soderbergh and Del Toro didn’t interview the former CIA officers who revealed to this writer how Fidel Castro himself, via the Bolivian Communist party, constantly fed the CIA info on Che’s whereabouts in Bolivia. Including Fidel Castro’s directive to the Bolivian Communists regarding Che and his merry band might have also added drama. “Not even an aspirin,” instructed Cuba’s Maximum Leader to his Bolivian comrades, meaning that Bolivia’s Communists were not to assist Che in any way “not even with an aspirin,” if Che complained of a headache.
But Soderbergh and Del Toro were utterly starstruck by their subject and slavishly compliant with Fidel Castro’s script and casting calls, and all these fascinating plots and subplots flew right over their heads.
Fidel Castro’s influence over the Western “intelligentsia” can only be described as magical, and renders any public evaluation of his regime among the smart set completely devoid of logic. To wit:
He jailed and tortured at a rate higher than Stalin and refuses (unlike apartheid South Africa, Pinochet’s Chile, and Somoza’s Nicaragua) to allow Amnesty International or the Red Cross to inspect his prisons. Yet Cuba sat on the U.N.’s Human Rights Committee, and when Castro visited New York as the U.N.’s keynote speaker in 1995, Newsweek magazine hailed him as “The Hottest Ticket in Manhattan!” and Time as “The Toast of Manhattan!” — referring to the social swirl that engulfed him and the autograph hounds who mobbed him from among New York’s smart set.
His legal code mandates 2 years in prison for anyone overheard cracking a joke about him. Yet Jack Nicholson and Chevy Chase sing his praises.
He abolished habeas corpus while his chief hangman (Che Guevara himself) declared that “judicial evidence is an archaic bourgeois detail.” Yet Harvard Law School invited him as its guest of honor, then erupted in cheers and tumultuous ovations after his every third sentence.
He drove out a higher percentage of Jews from Cuba than Czar Nicholas drove from Russia. Yet Shoah Foundation founder Steven Spielberg considered his dinner with Fidel Castro “the eight most important hours of my life.”
He’s a lily-white European soldier’s son who forcibly overthrew a Cuban government where Blacks served as President of the Senate, Minister of Agriculture, Chief of Army, and Head of State (Fulgencio Batista, the grandson of slaves, born in a palm-roofed shack). Then he jailed the longest-suffering black political prisoner of modern history (Eusebio Penalver, who suffered longer in Castro’s dungeons than Nelson Mandela suffered in South Africa’s). Today the prison population in Stalinist/apartheid Cuba is 90 percent black, while only 9 percent of the ruling Stalinist party is black. He sentenced other blacks (Dr. Elias Biscet, Jorge Antunez) to 20-year terms essentially for quoting Martin Luther King in a public square. Yet he’s a hero to the Congressional Black Caucus and receives frequent accolades and even passionate bear hugs from Charles Rangel and Jesse Jackson.
He converted a nation with a higher per capita income than half of Europe, the lowest inflation rate in the Western hemisphere, a larger middle class than Switzerland, and a huge influx of immigrants into one that repels Haitians. Yet Colin Powell and the London Times (owned by Rupert Murdoch) have recognized “the Castro Revolution’s achievements.”
In brief, except among “right-wing crackpots,” Cuba is ritually discussed, not with facts or reasoned observations, but with handy (and bogus) clichés.
Che Guevara’s delight in slaughtering Cubans was made possible only because these Cubans were completely defenseless at the time. Bound and blindfolded was his preference. And in that very manner they were lined up in front of his firing squads. In other settings featuring firearms (held by others) the troubled Argentine quivered with fear.
On Oct. 8, 1967, for instance, upon finally encountering armed and determined enemies, Che quickly dropped his fully loaded weapons. “Don’t shoot!” he whimpered. “I’m Che! I’m worth more to you alive than dead!”
For some reason del Toro and Soderbergh’s movie omits this scene.
Spare the Rod, Spoil Society
Does punishing free riders increase long-run cooperation?
Ronald Bailey | December 9, 2008

Want to punish the fat cats on Wall Street who have allegedly wrecked the economy? Of course you do! And it's only natural, according to research done by two economists whose work focuses on the puzzle of human cooperation. As Swiss researchers Ernst Fehr and Simon Gächter noted in 2002, "Unlike other creatures, people frequently cooperate with genetically unrelated strangers, often in large groups." They argued that one significant key to cooperation is the existence of "altruistic punishment." During the course of human evolution, people frequently engaged in cooperative activities such as big game hunting and the preservation of common property resources like fisheries. But it's all too easy for individuals to free ride on such projects. So how does true cooperation occur? Fehr and Gächter point to altruistic punishers: people who respond with strong emotion or even violence when someone else benefits from the labor of others without contributing something themselves. It may cost something to act as altruistic punisher, but going postal on non-cooperators does encourage everybody to contribute to the public good. The New York Times even speculated that this drive to punish free riders was behind the American public's disinclination to support the Congressional bailout proposals back in September.
To test their hypothesis, Fehr and Gächter set up a series of public goods experiments in which each player chose how much money to contribute to a joint investment—without knowing beforehand how much the other players would contribute. If everyone puts in a lot, they maximize their profits. However, the games were rigged such that non-cooperators could gain even more by taking a share of the profits while retaining their own initial endowments. The researchers found that when punishing non-cooperators was possible (say, spend $1 to reduce the free riders' endowments by $3), it substantially increased the amount that nearly all subjects invested in the public good. For example, in experiments done in 2000, where free riders could be punished, experimental subjects contributed two to four times more than when there was no punishment option.
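The mechanics of such a game can be sketched in a toy simulation. To be clear, everything below is invented for illustration: the group size, endowment, 1.6 pool multiplier, and especially the crude adaptation rule (punished free riders raise their contributions; unpunished players drift toward free riding) are assumptions, not the published Fehr–Gächter design. Only the $1-fee/$3-fine ratio mirrors the figure quoted above.

```python
def simulate(rounds, punish, n=4, endowment=20, multiplier=1.6, fee=1, fine=3):
    """Toy linear public goods game: each player contributes to a pool,
    the pool is multiplied and split equally among all n players. With
    punish=True, each player pays `fee` to hit each clear free rider
    (anyone contributing well below average) with a larger `fine`."""
    contribs = [15, 15, 15, 0]          # three cooperators, one free rider
    totals = [0.0] * n                  # cumulative payoff per player
    avg_contrib = []                    # average contribution each round
    for _ in range(rounds):
        share = sum(contribs) * multiplier / n
        payoffs = [endowment - c + share for c in contribs]
        avg = sum(contribs) / n
        if punish:
            for i in range(n):
                for j in range(n):
                    if i != j and contribs[j] < avg - 5:
                        payoffs[i] -= fee   # punisher pays a small cost...
                        payoffs[j] -= fine  # ...to impose a larger fine
        for i in range(n):
            totals[i] += payoffs[i]
        avg_contrib.append(avg)
        # Assumed adaptation rule, not experimental data: punished free
        # riders raise their contribution; absent any threat of
        # punishment, everyone drifts down toward free riding.
        if punish:
            contribs = [min(endowment, c + 5) if c < avg - 5 else c
                        for c in contribs]
        else:
            contribs = [max(0, c - 1) for c in contribs]
    return totals, avg_contrib
```

Run for many rounds, the punishment condition sees contributions rise and then stabilize, after which punishment is no longer triggered and its cost falls to zero, while the no-punishment condition decays toward universal free riding. This toy version does not reproduce every finding (in particular the short-run losses discussed next), but it shows the basic incentive structure.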
However, Dutch experimenters noted that public goods games where punishment was allowed actually produced lower overall returns than did games in which no punishment occurred. Why? Because the destruction of the non-cooperators' resources was greater than the subsequent gains from cooperation. Punishment increases cooperation, but it also makes the group poorer. This is not a particularly inspiring outcome.
In a new study published last week in Science, however, Gächter and his colleagues show that the lower overall returns from games in which punishment is possible may be an experimental artifact resulting from the number of rounds in which the games are played. In most experimental conditions, public goods games are played for ten rounds or less. The new research compares the outcomes of games lasting for 10 rounds versus 50 rounds, both when punishment is possible and when it is not.
What happens when players have only ten rounds in which to invest? As satisfying as it is to punish free riders, the average payoffs after ten rounds are indeed lower than when no punishment is allowed. The results are quite different when the games last 50 rounds. Interestingly, the payoffs in the initial rounds when punishment is possible are lower than the payoffs when punishment can't occur. However, as rounds of play accumulate, the payoffs in the games where free riders can be punished rise rapidly, while the payoffs in games in which free riders are not punished drop throughout the duration of play.
Another happy result is that once players understand that they can be punished for free riding, they start investing enthusiastically in the common pool, causing the costs of punishment to drop to near zero. "Overall, our experiments show that punishment not only increases cooperation, it also makes groups and individuals better off in the long run because the costs of punishment become negligible and are outweighed by the increased gains from cooperation," conclude the researchers. "These results support group selection models of cooperation and punishment, which require that punishment increases not only cooperation but also group average payoffs."
So punishing free riders increases cooperation and boosts incomes over the long run. But that is not always the case (at least in shorter-term games). Earlier this year, Gächter and his colleagues reported the results from a series of public goods games using players from 16 different societies. Their research turned up profound cross-cultural differences in response to punishment. All groups punished free riders, but the free riders did not all respond with increased cooperation. Instead, some sought revenge by punishing their punishers—if you whack me, I'll whack you, in other words. So a cycle of vendettas broke out.
For example, players from Muscat (Oman), Greece, and Saudi Arabia were the most vengeful. On the other hand, players from the United States, Australia, and Britain were the least vengeful and most likely to respond to punishment with increased cooperation. The researchers concluded that revenge is stronger among participants from "societies with weak norms of civic cooperation and a weak rule of law." Not surprisingly, the overall payoffs were significantly lower in the games in which participants indulged in cycles of vengeance.
We've come a long way from the bands of Pleistocene hunter-gatherers in which these psychological tendencies evolved. In today's complex economy, which encompasses globe-spanning webs of cooperation, how do people correctly identify free riders who merit punishment? Are the investment bankers with big bonuses free riders? What about hedge fund managers? Government agencies? Politicians? Perhaps the good news from experimental economics is that while Americans want to punish free riders as much as the next guys do, we are unlikely to engage in a self-defeating cycle of financial vengeance that will make us all poorer.
Ronald Bailey is Reason's science correspondent. His book Liberation Biology: The Scientific and Moral Case for the Biotech Revolution is now available from Prometheus Books.
So the big news today is that the governor of Illinois has been caught doing explicitly what most politicians do with more subtlety every single day: selling off their power to the highest bidder. I can't help but note that yet another politician is indicted on corruption charges at the very same time we are handing over unprecedented power to the political class as we partially nationalize the banking system and, apparently, the Big Three auto companies.
I simply do not understand how those who are in favor of giving government all of these new powers because they sincerely believe that doing so will work out the way their blackboard designs intended can keep a straight face. What kind of cognitive dissonance must it take to believe that the people YOU are handing power over to are "not like" Ted Stevens or Rod Blagojevich? How deeply must one be in denial or engage in rationalization to believe that they are "different"? How blind must one be to think that trillions of dollars in bailout money won't go to the highest bidder (as the lobbyists line up on K Street...) in a process different only in its wink-and-a-nod courtesies from Blagojevich's auctioning off of a Senate seat?
For me, the key insight of public choice is the same insight that underlies Austrian economics: it is the institutional framework that is the key to understanding the choices people make and the unintended outcomes they produce. As I said to a class last week: "Governments can't act like businesses because businesses only act like businesses because they operate in the institutional environment of private property, monetary exchange, and competition." In the same way, getting politicians to stop selling off their power isn't a matter of ethics or psychology; rather, it's about changing the rules of the game such that they do not have as much power to sell. Unfortunately, the current bailout mania is changing those rules in utterly the wrong direction.
Look at it this way: the bailouts are already becoming just a legal form of essentially the same behavior for which the governor has been indicted.
Silence=Acceptance
Rabbi Holtzberg was not murdered because of a territorial dispute over Kashmir or because of Bush’s foreign policy.
By Mark Steyn
Shortly after the London Tube bombings in 2005, a reader of Tim Blair, the Sydney Daily Telegraph’s columnar wag, sent him a note-perfect parody of a typical newspaper headline: “British Muslims Fear Repercussions Over Tomorrow’s Train Bombing.”
Indeed. And so it goes. This time round — Bombay — it was the Associated Press that filed a story about how Muslims “found themselves on the defensive once again about bloodshed linked to their religion.”
Oh, I don’t know about that. In fact, you’d be hard pressed from most news reports to figure out the bloodshed was “linked” to any religion, least of all one beginning with “I-” and ending in “-slam.” In the three years since those British bombings, the media have more or less entirely abandoned the offending formulations — “Islamic terrorists,” “Muslim extremists” — and by the time of the assault on Bombay found it easier just to call the alleged perpetrators “militants” or “gunmen” or “teenage gunmen,” as in the opening line of this report in the Australian: “An Adelaide woman in India for her wedding is lucky to be alive after teenage gunmen ran amok…”
Kids today, eh? Always running amok in an aimless fashion.
The veteran British TV anchor Jon Snow, on the other hand, opted for the more cryptic locution “practitioners.” “Practitioners” of what, exactly?
Hard to say. And getting harder. Tom Gross produced a jaw-dropping round-up of Bombay media coverage: The discovery that, for the first time in an Indian terrorist atrocity, Jews had been attacked, tortured, and killed produced from the New York Times a serene befuddlement: “It is not known if the Jewish center was strategically chosen, or if it was an accidental hostage scene.”
Hmm. Greater Bombay forms one of the world’s five biggest cities. It has a population of nearly 20 million. But only one Jewish center, located in a building that gives no external clue as to the bounty waiting therein. An “accidental hostage scene” that one of the “practitioners” just happened to stumble upon? “I must be the luckiest jihadist in town. What are the odds?”
Meanwhile, the New Age guru Deepak Chopra laid all the blame on American foreign policy for “going after the wrong people” and inflaming moderates, and “that inflammation then gets organized and appears as this disaster in Bombay.”
Really? The inflammation just “appears”? Like a bad pimple? The “fairer” we get to the, ah, inflamed militant practitioners, the unfairer we get to everyone else. At the Chabad House, the murdered Jews were described in almost all the Western media as “ultra-Orthodox,” “ultra-” in this instance being less a term of theological precision than a generalized code for “strange, weird people, nothing against them personally, but they probably shouldn’t have been over there in the first place.” Are they stranger or weirder than their killers? Two “inflamed moderates” entered the Chabad House, shouted “Allahu Akbar!,” tortured the Jews and murdered them, including the young Rabbi’s pregnant wife. Their two-year-old child escaped because of a quick-witted (non-Jewish) nanny who hid in a closet and then, risking being mown down by machine-gun fire, ran with him to safety.
The Times was being silly in suggesting this was just an “accidental” hostage opportunity — and not just because, when Muslim terrorists capture Jews, it’s not a hostage situation, it’s a mass murder-in-waiting. The sole surviving “militant” revealed that the Jewish center had been targeted a year in advance. The 28-year-old rabbi was Gavriel Holtzberg. His pregnant wife was Rivka Holtzberg. Their orphaned son is Moshe Holtzberg, and his brave nanny is Sandra Samuels. Remember their names, not because they’re any more important than the Indians, Britons, and Americans targeted in the attack on Bombay, but because they are an especially revealing glimpse into the pathologies of the perpetrators.
In a well-planned attack on iconic Bombay landmarks symbolizing great power and wealth, the “militants” nevertheless found time to divert 20 percent of their manpower to torturing and killing a handful of obscure Jews helping the city’s poor in a nondescript building. If they were just “teenage gunmen” or “militants” in the cause of Kashmir, engaged in a more or less conventional territorial dispute with India, why kill the only rabbi in Bombay? Dennis Prager got to the absurdity of it when he invited his readers to imagine Basque separatists attacking Madrid: “Would the terrorists take time out to murder all those in the Madrid Chabad House? The idea is ludicrous.”
And yet we take it for granted that Pakistani “militants” in a long-running border dispute with India would take time out of their hectic schedule to kill Jews. In going to ever more baroque lengths to avoid saying “Islamic” or “Muslim” or “terrorist,” we have somehow managed to internalize the pathologies of these men.
We are enjoined to be “understanding,” and we’re doing our best. A Minnesotan suicide bomber (now there’s a phrase) originally from Somalia returned to the old country and blew up himself and 29 other people last October. His family prevailed upon your government to have his parts (or as many of them as could be sifted from the debris) returned to the United States at taxpayer expense and buried in Burnsville Cemetery. Well, hey, in the current climate, what’s the big deal about a federal bailout of jihad operational expenses? If that’s not “too big to fail,” what is?
Last week, a Canadian critic reprimanded me for failing to understand that Muslims feel “vulnerable.” Au contraire, they project tremendous cultural confidence, as well they might: They’re the world’s fastest-growing population. A prominent British Muslim announced the other day that, when the United Kingdom becomes a Muslim state, non-Muslims will be required to wear insignia identifying them as infidels. If he’s feeling “vulnerable,” he’s doing a terrific job of covering it up.
We are told that the “vast majority” of the 1.6-1.8 billion Muslims (in Deepak Chopra’s estimate) are “moderate.” Maybe so, but they’re also quiet. And, as the AIDS activists used to say, “Silence=Acceptance.” It equals acceptance of the things done in the name of their faith. Rabbi Holtzberg was not murdered because of a territorial dispute over Kashmir or because of Bush’s foreign policy. He was murdered in the name of Islam — “Allahu Akbar.”
I wrote in my book, America Alone, that “reforming” Islam is something only Muslims can do. But they show very little sign of being interested in doing it, and the rest of us are inclined to accept that. Spread a rumor that a Koran got flushed down the can at Gitmo, and there’ll be rioting throughout the Muslim world. Publish some dull cartoons in a minor Danish newspaper, and there’ll be protests around the planet. But slaughter the young pregnant wife of a rabbi in Bombay in the name of Allah, and that’s just business as usual. And, if it is somehow “understandable” that for the first time in history it’s no longer safe for a Jew to live in India, then we are greasing the skids for a very slippery slope. Muslims, the AP headline informs us, “worry about image.” Not enough.
Like Mark Draughn, I've been somewhat skeptical of Barry Cooper, the former drug cop turned pitchman for how-to-beat-the-cops videos. He comes off as more of a huckster than a principled whistle-blower, which I think does the good ideas he stands for (police reform) more harm than good. But damn. I have to hand it to him. This might be one of the ballsiest moves I've ever seen.
KopBusters rented a house in Odessa, Texas and began growing two small Christmas trees under a grow light similar to those used for growing marijuana. When faced with a suspected marijuana grow, the police usually use illegal FLIR cameras and/or lie on the search warrant affidavit claiming they have probable cause to raid the house. Instead of conducting a proper investigation which usually leads to no probable cause, the Kops lie on the affidavit claiming a confidential informant saw the plants and/or the police could smell marijuana coming from the suspected house.
The trap was set and less than 24 hours later, the Odessa narcotics unit raided the house only to find KopBuster’s attorney waiting under a system of complex gadgetry and spy cameras that streamed online to the KopBuster’s secret mobile office nearby.
To clarify just a bit, according to Cooper, there was nothing illegal going on in the bait house, just two evergreen trees and some grow lamps. There was no probable cause. So a couple of questions come up. First, how did the cops get turned on to the house in the first place? Cooper suspects they were using thermal imaging equipment to detect the grow lamps, a practice the Supreme Court has said is illegal. The second question is, what probable cause did the police put on the affidavit to get a judge to sign off on a search warrant? If there was nothing illegal going on in the house, it's difficult to conceive of a scenario where either the police or one of their informants didn't lie to get a warrant. Cooper chose the Odessa police department for baiting because he believes police there instructed an informant to plant marijuana on a woman named Yolanda Madden. She's currently serving an eight-year sentence for possession with intent to distribute. According to Cooper, the informant actually admitted in federal court that he planted the marijuana. Madden was convicted anyway.
The story's worth watching, not only to see if the cops themselves are held accountable for this, but whether the local district attorney tries to come up with a crime with which to charge Cooper and his assistants. I can't imagine such a charge would get very far, but I wouldn't be surprised to see someone try.
New Danish Book Draws Jihadist Ire IPT News December 5, 2008 It is an equation becoming all too familiar. A new book released in Europe contains essays critical of Islam and illustrations of the Prophet Mohammed. In response, some are calling for blood.
Danish journalist Lars Hedegaard's book Groft Sagt (Rough Talk), was released in Denmark Monday. It is a collection of about 100 of his favorite newspaper columns from a Copenhagen daily. Many of the columns are critical of Islam. In addition, the book features 26 new illustrations from Kurt Westergaard, whose drawings of the Prophet Mohammed in the newspaper Jyllands Posten in 2005 sparked a wave of violent protests.
An Israeli security center is sounding the alarm about calls for a violent backlash after noticing a series of incendiary posts on jihadist web sites. According to an International Institute for Counter-Terrorism (ICT) release, someone identifying himself as Abu Salem posted comments about Hedegaard's book on a website called Hanein, "a mouthpiece for Al-Qaeda and other jihad organizations":
"Abu Salem requests that all who love the Prophet Muhammad help spread the news of the upcoming publication and notify religious leaders of what ‘these pigs' are attempting to do. One forum visitor responded to the post, suggesting that Bin Laden attack Copenhagen, repeating the call: ‘Bin Laden, Copenhagen!' several times. Another forum visitor wrote: ‘Our blood... our souls... our children... our money... all that we have... the entire world… anything so that a single hair of your distinguished head [i.e. Muhammad] is not harmed.'"
In a separate post on another site, the ICT reports an internet user identified as Saqr Al-Islam Al-Maqdasi said a boycott of Danish goods would be an insufficient response. Instead:
"[…] by attacking Denmark everywhere so that it be known we are a nation sacrificing itself for Islam and its Prophet […] this cattle doesn't understand anything but the language of rage, and we will decapitate the heads and set fire to the ground underneath their feet. They do not understand anything but the language of blood and scattering of body parts. I ask that Allah make successful the way of the loyal Jihad warriors, in order to blow up and set fire to Denmark."
In an interview with the Investigative Project on Terrorism, Hedegaard said he has been in communication with Danish law enforcement but isn't letting the threatening response curtail his activities. He said his book is simply being used by jihadists looking for an excuse to justify violence. "It is quite obvious that they think it is the right moment to strike a new offensive against Denmark and against free speech. It could be anything. This is planned. This is orchestrated."
In February, Danish police arrested three men suspected of planning to kill Westergaard, who had been forced into hiding after the 2005 publication of his Mohammed illustrations. Many Muslims consider any image of Mohammed to be blasphemous.
The response to perceived insults against Islam has grown increasingly violent.
In 2004, filmmaker Theo van Gogh was murdered on an Amsterdam street by a Dutch Moroccan angered by his film "Submission." The murderer stuck a note on van Gogh threatening Ayaan Hirsi Ali, who developed the idea for the film and wrote it. Since that time, Ms. Ali, formerly a Dutch MP, has had to live with constant protection, often a contentious issue in the Netherlands.
In September, the home of British publisher Martin Rynja was firebombed in advance of the publication of the novel "The Jewel of Medina," a fictional account of the life of Aisha, a child bride of the Prophet Mohammed.
These incidents make it more important to continue issuing work that may offend some people, Hedegaard said. "The point has to be made again and again. We live in a country with free speech. Unless we make this point again and again, every day, we don't have free speech."
Most of the columns in the book are not about Islam. Others deal with foreign policy, religion and "idiots that need to be taken down."
Hedegaard's newspaper, Berlingske Tidende, let him go earlier this year. His bosses told him he was getting boring and repetitive, but he believes they were bowing to pressure from his critics. As the new controversy brews, he said he feels strong public support, though Danish journalists and academics have been either passive or hostile toward him.
Despite the controversy and the threats accompanying it, Hedegaard vowed to continue speaking his mind. Whether those threats might one day be carried out against him personally is not something he dwells on.
"I cannot live that way," he said. "I might as well be dead. It's like dying before you die... Death is when you are forced to shut up. I don't want them to shut me up before I die physically."
RHL is my favorite Antifederalist, and one of the fellas who argued against a "select militia," favoring one formed of "the whole of the yeomanry" instead. This latter point is important: those who would disappear the Second Amendment by claiming its protections have devolved to the National Guard are ignoring the "select militia"--meaning a militia comprised of specifically appointed or empowered people rather than the citizenry as a whole--fears the Antifederalists not only clearly expressed but used to buttress arguments for a Bill of Rights.
Some Richard Henry Lee quotes follow:
A militia when properly formed are in fact the people themselves... and include all men capable of bearing arms. . . To preserve liberty it is essential that the whole body of people always possess arms... The mind that aims at a select militia, must be influenced by a truly anti-republican principle. Richard Henry Lee, Additional Letters From The Federal Farmer, 1788, at 169
No free government was ever founded, or ever preserved its liberty, without uniting the characters of the citizen and soldier in those destined for the defense of the state... such are a well-regulated militia, composed of the freeholders, citizen and husbandman, who take up arms to preserve their property, as individuals, and their rights as freemen. Richard Henry Lee, State Gazette (Charleston), September 8, 1788
To preserve liberty, it is essential that the whole body of people always possess arms, and be taught alike, especially when young, how to use them... Richard Henry Lee, 1788, Additional Letters From The Federal Farmer 53, 1788
To say that a bad government must be established for fear of anarchy is really saying that we should kill ourselves for fear of dying.
The constitution ought to secure a genuine militia and guard against a select militia... all regulations tending to render this general militia useless and defenseless, by establishing select corps of militia, or distinct bodies of military men, not having permanent interests and attachments to the community ought to be avoided.
We should be getting used to the depressing spectacle of once-great corporations begging for assistance from Washington. Yet perhaps nothing is more painful than to see General Motors and other big U.S.-based car companies – once exemplars of both American economic supremacy and middle-class aspirations – fall to such an appalling state.
Yet if GM represents all that is bad about the American economy, particularly manufacturing, it does not represent the breadth of our industrial landscape. Indeed, even as the dull-witted leviathan sinks, many nimble companies have shown remarkable resiliency.
These include a series of small and mid-sized firms – in fields as diverse as garments and agricultural machinery, steel and energy equipment – that have managed to thrive in recent years. They also include a growing contingent of foreign-owned firms, notably in the automobile industry, that have found that "Made in America" is not necessarily uncompetitive, unprofitable or impossible.
Indeed, until the globalization of the financial crisis, American manufacturing exports were reaching record levels. Overall, U.S. industry has become among the most productive in the world – output has doubled over the past 25 years, and productivity has grown at a rate twice that of the rest of the economy. Far from dead, our manufacturing sector is the world's largest, with 5% of the world's population producing five times their share in industrial goods.
So what is the problem then? If it is not the effort and ingenuity of American workers or our infrastructure, Detroit's problems must lie somewhere else, largely with almost insanely bad management.
We have to remember that the Big Three have been losing market share through even the best of times. Their litany of excuses is as tiresome as their product lines. Back in the 1970s it was "cheap" Japanese labor, something that can no longer be cited as an excuse. European car makers, if anything, have even higher wage costs.
Then there are high gas prices – a good excuse, it appears, back in the 1970s, as well as more recently. But the Detroit auto industry has now had three decades to come up with fuel efficient products that are also fun to drive and reliable. While they have slumbered, the Japanese, Koreans and now the Europeans – with products like the new Volkswagen Jetta – have made enormous strides.
Now it is the credit crunch, the car makers say. OK. Will increased credit mean that people will suddenly scoop up the same products they have been deserting in droves for decades? Keep in mind that the desertion could get even worse if the congressional greens – led by new Energy and Commerce Committee Chairman Rep. Henry Waxman – impose stiffer taxes on gas, which will hurt the guzzlers that have generated most of Big Three profits.
So why the push to bail out the Big Three? It's basically about regional politics. The deindustrializing states of California and New York may not care much, but the big car companies' operations are overwhelmingly concentrated in the politically volatile Great Lakes region, an area that proved decisive in President-elect Obama's victory. Another big reason may be that up to 240,000 jobs in Illinois, the nation's new political epicenter, are tied to the big automakers.
Sadly, dependence on the Big Three has had long-term tragic results for this entire region. Between 2000 and 2007 – before the onset of the financial crisis – the nation's largest percentage losses of manufacturing jobs were concentrated in Big Three bastions like Detroit, Warren-Farmington Hills, Saginaw, Flint and Cleveland. In the five years before the onset of the financial crisis, Michigan alone had lost one-third of its auto manufacturing jobs. Now that figure is up to half.
Worse still has been the psychological dependency that has grown from this troubled relationship. By their very nature, declining businesses – particularly unionized ones – tend to protect their older members and encrusted bureaucracies more than they look to the future. This also creates a political environment where the incentive is not to spur innovation, but to protect the already established.
Michigan, for example, has met the challenge of its Big Three habit with a combination of farce and failure. Under the clueless leadership of its governor, Jennifer Granholm, the state first hoped its "cool cities" program would keep young, educated workers close to home. After that failed to work, the governor then pushed the highest tax boost in state history, a reliable job-killer.
So let us be clear. It did not take a world financial crisis to sink Michigan; it was getting there very well on its own. Nearly one in three residents, according to a July 2006 Detroit News poll, believe that Michigan is "a dying state." Two in five of the state's residents under 35 said they were seriously considering leaving the state.
Fortunately, the Big Three do not represent the entire picture of American manufacturing. Even within the Great Lakes region, Wisconsin, which ranks second in per capita employment in manufacturing, has held onto most of its industrial employment due to its large, highly diversified base of smaller-scale specialized manufacturers.
If Congress and President Obama want to figure out how to restart our industrial economy, they need to travel not to Detroit but to an alternative universe that includes the South and Appalachia, where most of the new foreign-owned auto manufacturers have clustered. States like Alabama, with the second-largest per capita concentration of auto-related jobs, as well as South Carolina, Tennessee, Kentucky, Georgia and Mississippi, have been growing these high-wage jobs for a new generation. In the process, they have brought unprecedented opportunity to some of the nation's historically poorest regions.
Nor are these states looking to remain mere assembly centers. For example, they have launched bold new research initiatives, such as the recently formed International Automotive Research Center at Clemson University, which offers the nation's only Ph.D. in automotive engineering, to make their region a major center of technological innovation for the industry. And the fact that the region will likely be producing the majority of the most fuel-efficient and low-emission cars certainly cannot hurt their future prospects.
However, it is also critical to see beyond merely autos. If you look at the period between 2000 and 2007, as we did at the Praxis Strategy Group, much of the fastest growth in manufacturing was taking place in areas tied to energy production like Midland and Longview, Texas, and Morgantown, W.Va., all of which enjoyed 15% or more increases in manufacturing jobs. Already states like Arkansas, Alabama, Iowa and Mississippi boast more per capita industrial jobs than either Michigan or Ohio.
Another strong performer has been the Great Plains. Places like Dubuque, Iowa, and Fargo and Grand Forks, N.D., experienced substantial growth in industrial jobs during the past decade. The base here, as in Wisconsin, is highly diverse and includes agricultural and construction equipment, electronics as well as a burgeoning renewable fuels sector, with firms such as LM Glasfibre, a Danish company with a large operation in Grand Forks. Washington state has been another bright spot, powered by Boeing and other manufacturers attracted to its low-cost, low-emission hydropower.
If the country is serious about enhancing U.S. industrial might – as it should be – it might want to ask executives and entrepreneurs in these areas, as well as foreign investors, what they need to keep growing and expanding exports. There is clearly a demonstrated global market for Boeing airplanes and Caterpillar construction and agricultural machinery, as well as a host of high-tech and fashion-related products now being churned out in factories scattered across the country.
The people running these firms should be those at the congressional hearings, not the pathetic losers from companies like General Motors. They might even have some helpful ideas, like streamlining regulations, investing in critical infrastructure and research facilities, expanding support for training a new generation of skilled blue collar workers and using incentives to encourage firms to improve their energy efficiency. These are the steps we can expect our competitors in Europe, Asia and the developing world to take as well.
Rather than looking for ways to bail out the most egregious serial failures, let us find ways to provide incentives for those successful at creating new jobs and saving existing ones.
This article originally appeared at Forbes.com.
Joel Kotkin is executive editor of NewGeography.com and is a presidential fellow in urban futures at Chapman University. He is author of The City: A Global History and is finishing a book on the American future.
Sun's Magnetic Field May Impact Weather And Climate: Sun Cycle Can Predict Rainfall Fluctuations
ScienceDaily (Dec. 3, 2008) — The sun's magnetic field may have a significant impact on weather and climatic parameters in Australia and other countries in the northern and southern hemispheres. According to a study in Geographical Research, the droughts are related to the solar magnetic phases and not the greenhouse effect.
The study uses data from 1876 to the present to examine the correlation between solar cycles and the extreme rainfall in Australia.
It finds that the Southern Oscillation Index (SOI) – the basic tool for forecasting variations in global and oceanic patterns – and rainfall fluctuations recorded over the last decade are similar to those in 1914-1924.
Author Professor Robert G. V. Baker from the School of Environmental Studies, University of New England, Australia, says, “The interaction between the directionality in the Sun’s and Earth’s magnetic fields, the incidence of ultraviolet radiation over the tropical Pacific, and changes in sea surface temperatures with cloud cover – could all contribute to an explanation of substantial changes in the SOI from solar cycle fluctuations. If solar cycles continue to show relational values to climate patterns, there is the potential for more accurate forecasting through to 2010 and possibly beyond.”
The SOI-solar association has been investigated recently due to increasing interest in the relationship between the sun’s cycles and the climate. The solar application offers the potential for the long-range prediction of SOI behavior and associated rainfall variations, since quasi-periodicity in solar activity results in an expected cycle of situations and phases that are not random events.
Professor Baker adds, “This discovery could substantially advance forecasting from months to decades. It should result in much better long-term management of agricultural production and water resources, in areas where rainfall is correlated to SOI and El Niño (ENSO) events.”
Baker et al. Exploratory Analysis of Similarities in Solar Cycle Magnetic Phases with Southern Oscillation Index Fluctuations in Eastern Australia. Geographical Research, 2008; 46 (4): 380 DOI: 10.1111/j.1745-5871.2008.00537.x Adapted from materials provided by Wiley-Blackwell.
No one expects the Vatican to apologize for the IRA.
Well yes, but the Vatican would condemn violence perpetrated in the name of Christianity. Think your comparison is between apples and oranges.
The point remains that some fairly tame cartoons caused all sorts of outrage in Muslim circles, yet an incident orders of magnitude more horrific inspires little public outcry in the same circles. A case of misapplied outrage, yes?
Everything worked as long as housing prices continued to rise. Suddenly, though, there weren't enough buyers. (See "Houses of Pain," page 40.) At the same time, the first wave of the more exotic mortgages began to falter. Interest rates on adjustable-rate mortgages moved higher; the Fed was finally constricting the money flow, with the federal funds rate peaking at 5.25 percent in July 2006. Mortgages that were initially interest-only were close to resetting, with monthly payments jumping to include principal. A significant number of these mortgages moved into default and foreclosure, which further dampened housing prices.
The overall foreclosure numbers were small; someone simply looking at housing statistics could be forgiven for wondering what all the fuss was about. Nationally, throughout 2007 and 2008, the number of mortgages moving into foreclosure was only about 1 percent to 2 percent, suggesting that 98 percent to 99 percent of mortgages are sound. But the foreclosed mortgages punched way above their weight class; they were laced throughout the mortgage-backed securities owned by most financial institutions.
The complexity of these financial products cannot be overstated. They usually had two or three "tranches," different baskets of mortgages that paid out in different ways. Worse, as different firms bought and sold them, they were sliced and diced in varying ways. A mortgage-backed security owned by one company could be very different when it was sold to another.
No one fully understood how exposed the mortgage-backed securities were to the rising foreclosures. Because of this uncertainty, it was hard to place a value on them, and the market for the instruments dried up. Accounting regulations required firms to value their assets using the "mark-to-market" rule, i.e., based on the price they could fetch that very day. Because no one was trading mortgage-backed securities anymore, most had to be "marked" at something close to zero.
This threw off banks' capital-to-loan ratios. The law requires banks to hold assets equal to a certain percentage of the loans they give out. Lots of financial institutions had mortgage-backed securities on their books. With the value of these securities moving to zero (at least in accounting terms), banks didn't have enough capital on hand for the loans that were outstanding. So banks rushed to raise money, which raised self-fulfilling fears about their solvency.
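The mechanics described above can be sketched with a toy calculation. All figures here are hypothetical, chosen only to illustrate how a near-zero mark on mortgage-backed securities can push an otherwise healthy bank below a required capital ratio:

```python
# Toy illustration: marking mortgage-backed securities (MBS) to ~zero
# can push a bank below a required capital-to-loan ratio.
# All numbers are hypothetical.

def capital_ratio(capital: float, loans_outstanding: float) -> float:
    """Capital held as a fraction of loans outstanding."""
    return capital / loans_outstanding

REQUIRED_RATIO = 0.08   # assume an 8% requirement, for illustration

other_capital = 60.0    # capital not tied to MBS, in $billions
mbs_book_value = 40.0   # MBS previously carried near face value
loans = 1000.0          # loans the bank has extended

before = capital_ratio(other_capital + mbs_book_value, loans)
after = capital_ratio(other_capital + 0.0, loans)  # MBS marked to ~zero

print(f"ratio before write-down: {before:.2%}")   # 10.00% -> compliant
print(f"ratio after write-down:  {after:.2%}")    # 6.00%  -> short of 8%
print("compliant?", after >= REQUIRED_RATIO)      # False -> must raise capital
```

The shortfall forces the bank to raise capital in a panicked market, which is exactly the self-fulfilling solvency fear the article describes.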
Two simple regulatory tweaks could have prevented much of the carnage. Suspending mark-to-market accounting rules (using a five-year rolling average valuation instead, for example) would have helped shore up the balance sheets of some banks. And a temporary easing of capital requirements would have given banks the breathing room to sort out the mortgage-backed security mess. Although it is hard to fix an exact price for these securities in this market, given that 98 percent of underlying mortgages are sound, they clearly aren't worth zero. (For more proposed solutions, see "Better Than a Bailout," page 30.)
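The rolling-average alternative mentioned above can be pictured with assumed numbers (this is a sketch of the arithmetic, not a description of any actual accounting rule). A five-year average recognizes a panic-year price collapse gradually instead of forcing an immediate near-total write-down:

```python
# Hypothetical year-end valuations of an MBS portfolio, in cents on the
# dollar. The last figure models a panic year when the instrument
# barely trades and spot marks collapse.
yearly_marks = [98, 97, 99, 96, 5]

spot_mark = yearly_marks[-1]              # strict mark-to-market value
rolling_avg = sum(yearly_marks[-5:]) / 5  # five-year rolling average

print(spot_mark)    # 5    -> balance sheet shows a near-total loss today
print(rolling_avg)  # 79.0 -> the loss is recognized gradually instead
```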
Alas, the Fed and the Treasury Department, in full crisis mode, decided to provide their own capital to meet the regulatory requirements. The first misstep, in March, was to force a hostile takeover of Bear Stearns, putting up $30 billion to $40 billion to back J.P. Morgan's purchase of the distressed investment bank. In the long term, it probably would have been better to let Bear Stearns fail and go into bankruptcy. That would have set in motion legal proceedings that would have established a baseline price for mortgage-backed securities. From this established price, banks could have begun to sort out their balance sheets.
Immediately after the collapse of Bear Stearns, rumors circulated on Wall Street of trouble at another investment bank, Lehman Brothers. Lehman went on a P.R. offensive to beat back those rumors. The company was successful in the short term but then did nothing during the next several months to shore up its balance sheet. Its demise in September – the only major bankruptcy allowed during bailout season – was largely self-inflicted.
The collapse of the mortgage-backed security market now started to pollute other financial products. Collateralized debt obligations and credit default swaps are complicated financial products intended to help spread the risk of defaults. An investor holding a bond or mortgage-backed security may purchase one of these products so that, in the event the bond or mortgage-backed security defaults, they would recoup their investment. Bonds rarely default, so collateralized debt obligations and credit default swaps had traditionally been a fairly safe and conservative market.
But like the underlying bonds and mortgage-backed securities, these instruments became more exotic. Companies sold credit default swaps on an individual bond or security to multiple investors. If there was a default, each one of these investors would have to be paid up to the full amount of the bond or security. Imagine if you bought fire insurance on your house and all your neighbors did too. If your house burned, everyone would be compensated for the loss of your house.
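The arithmetic behind the fire-insurance analogy can be made concrete. The figures below are hypothetical, but they show why selling swaps on the same bond to many investors turns a modest premium stream into an outsized liability:

```python
# Hypothetical: a seller writes credit default swaps to several
# investors on the SAME $10M bond. Premium income is small; a single
# default obligates the seller to pay every holder in full.

bond_face_value = 10_000_000
annual_premium_rate = 0.01   # assumed: 1% of face value per swap
num_swap_holders = 8         # several investors insured the same bond

premiums_collected = bond_face_value * annual_premium_rate * num_swap_holders
payout_on_default = bond_face_value * num_swap_holders

print(premiums_collected)  # 800000   -> modest income while the bond performs
print(payout_on_default)   # 80000000 -> eight times face value owed at once
```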
Suddenly, stable firms such as AIG, which aggressively sold credit default swaps, were over-exposed. These developments threw off the accounting in one division of AIG, threatening the rest of the firm. Given a few days, AIG could have sold enough assets to cover the spread, but ironclad accounting regulations precluded this. So the government stepped in.
The one-two punch of Lehman's failure and the government's $85 billion bailout of AIG on September 16 spooked both Wall Street and the White House. With Fannie Mae and Freddie Mac already in government receivership, there were fears that the weakness stemming from mortgage-backed securities would spread through the entire financial system. Money began leaving the markets to seek the security of Treasury bonds.
Then, on September 18, it was reported that the Reserve Primary Fund and the Reserve International Liquidity Fund, two commercial paper money market funds, "broke the buck," meaning they lost money. The commercial paper market is supposed to be boring. Every day, companies around the world borrow hundreds of billions to smooth cash flows; the next day they pay it back, giving the bank that lent the money a very small return. When these money market funds lost money, it was a signal that the commercial paper market was drying up, that banks were hesitant to make even these very safe loans.
That's when the market freaked out. The Dow Jones Industrial Average fell over 600 points on September 19. When the government announced that there would be a rescue plan, the market temporarily rebounded. After some details of the plan emerged over the weekend, the Dow had another selloff. A roller-coaster of selloffs and rallies followed, as the market waited to see what the government would do. Every gyration, up or down, was used as an argument for the bailout. If the market moved lower, it was because Congress hadn't approved the bailout. If it moved higher, it was because the market was convinced the bailout would happen. On October 2, after initially defeating the package, the House of Representatives bowed to pressure and passed it.
The original plan crafted by the Treasury Department would have authorized the government to spend up to $700 billion on mortgage-backed securities and other "toxic" debt, thereby removing them from banks' balance sheets. With the "bad loans" off the books, the banks would become sound. Because it was assumed that the mortgage-backed security market was "illiquid," the government would become the buyer of last resort for these products. There was a certain simple elegance to the plan. To paraphrase H.L. Mencken, the solution was neat, plausible, and wrong.
No market is truly illiquid. Last summer, Merrill Lynch unloaded a bunch of bad debt at 22 cents on the dollar. There are likely plenty of buyers for the banks' toxic debt, just not at the price the banks would prefer. Enter the government, which clearly intended to purchase mortgage-backed securities at some premium above the market price.
We don't know yet what the premium will be or how it will be determined. Well, in a sense we do. It will mostly be determined by politics, not economics. This is the foundational flaw in the Treasury Department plan.
The department has begun a process to determine the assets it will buy and the manner it will set a price. As with everything in government, these are lobbyable moments, a time when swarms of financial service firms, investor groups, and housing advocates try to game the system for their clients or members. The further away from economics these decisions are made, the more risk there is for taxpayers. The higher the premium over any current market price, the longer the government will have to hold the assets and the more exposure there will be for taxpayers.
The risk here is particularly high given the complicated and opaque nature of the financial instruments involved. Few on Wall Street truly understand these products. The bailout authorizes the Treasury Department to bypass normal contracting rules and hire outside private firms to handle the purchases and manage the toxic assets. The fact that these private firms have ongoing relationships with the banks selling the bad assets creates a serious conflict of interest.
Some commentators have drawn parallels to the savings and loan bailout in the 1980s, when the government established the Resolution Trust Corporation to dispose of the assets of failed thrifts. But the Resolution Trust Corporation took on those assets only as thrifts went bankrupt. Under the new plan, by contrast, federal bureaucrats and their outside contractors decide which assets to buy, including equity stakes in commercial banks that aren't particularly happy about having Uncle Sam as a major shareholder. Bureaucrats will be actively investing taxpayer funds in individual securities and then managing the portfolio until they decide to sell. You don't have to be paranoid to fear the political dynamics that will shape these decisions.
More to Come
We have crossed a financial Rubicon. The bailout is just the beginning of Washington's increased involvement in the economy. The government has now taken partial ownership of the nation's nine largest banks. There is talk of bailouts for other weak industries, including the carmakers and the airlines. There certainly will be a host of new regulations that will likely be with us long after the government has sold off the last of the bad debt. We could be entering an era where the financial services sector evolves into a kind of regulated utility.
Libertarians used to joke that we were on the verge of another rerun of That '70s Show, with a return to old regulations and high taxation. We should be so lucky. The events of the last several months presage a return to the 1930s, with a new surge of direct federal involvement in the economy. If we fail to beat back these new controls, future historians may mark this time as the beginning of a long winter of statism and stagnation.
Mike Flynn is director of government affairs at the Reason Foundation.
Concerted government policy helped trigger the financial meltdown—and will almost certainly extend it.
Michael Flynn | January 2009 Print Edition
It was not an absence of federal intervention that produced the Great Financial Panic of 2008. Contrary to the assertions of those clamoring for new regulations (see "Is Deregulation to Blame?," page 36), the liquidity shortage and credit freeze that triggered Washington's biggest intrusion into the economy since Richard Nixon's wage and price controls were caused by bad government policy and worse crisis management.
As the housing bubble inflated from 1997 to 2006, banks, fueled by the Federal Reserve, prodded by activists, and egged on by Wall Street, created ever more exotic mortgage loans that pushed up housing prices and extended mortgage debt to families vulnerable to economic downturns. Several layers of financial products were tied to these mortgages. As some of the derivative instruments and underlying mortgages collapsed, collateral damage raced through the entire system.
In 2008 the Bush administration took a series of frantic steps to stop the bleeding. It backed a hostile takeover of the investment bank Bear Stearns. It took over home lending behemoths Fannie Mae and Freddie Mac, an act that put $5 trillion worth of mortgages—more than $1 trillion of which are subprime—on the federal government's books, not to mention the $200 billion it had to commit to guarantee Fannie and Freddie's debts. It made hundreds of billions of dollars available to banks through the Fed's "discount window," its mechanism to make short-term loans to certain institutions, put up $85 billion to take over the insurance giant AIG, and offered another $250 billion to individual banks to rebuild their balance sheets.
In October the administration convinced Congress to authorize the Treasury Department to spend upward of $700 billion buying up toxic mortgage-backed securities, most of which contain sizeable numbers of subprime mortgages. Each step not only failed to calm the market but seemed to increase the sense of impending doom (also fanned by sky-is-falling pronouncements from President Bush on down). After a month of U.S. government action, the mortgage crisis had grown into a global financial panic, the repercussions of which we'll be living with for decades.
The Roots of the Crisis
Throughout the 1990s and the early years of this century, both major political parties became intoxicated with the idea of promoting "affordable" housing. By the time the crisis blew up, Congress was mandating that roughly 50 percent of the mortgages issued by Fannie and Freddie go to households making below their area's median income.
Many conservative commentators have blamed the housing mess on the 1977 Community Reinvestment Act (CRA), which essentially required banks to increase lending in low-income areas. While the CRA was a bad law, its role in recent events has been overblown. After all, it was on the books for decades before the bubble began. The law's worst legacy is the permanent network of "affordable housing" advocates that sprang up after it passed. These groups, which were intended to facilitate lending in poor areas, continually called for increased activity by banks and additional government support for affordable housing initiatives. The CRA also helped create a climate in which lending to low-income households was a key metric and condition regulators used in approving bank mergers.
Other, more recent developments played a bigger role in the financial crisis. In 1993 the Federal Reserve Bank of Boston published "Closing the Gap: A Guide to Equal Opportunity Lending." The report recommended a series of measures to better serve low-income and minority households. Most of the recommendations were routine and mundane: better staff training, improved outreach and communication, and the like. But the report also urged banks to loosen their income thresholds for receiving a mortgage. In the years after the report was published, activists and officials—especially in the Department of Housing and Urban Development, under both Bill Clinton and George W. Bush—used its findings to pressure banks to increase their lending to low-income households. By the turn of the century, other changes in federal policy made those demands more achievable.
You can't lend money if you don't have it. And beginning in 2001, the Federal Reserve made sure lots of people had it. In January 2001, when President Bush took office, the federal funds rate, the key benchmark for all interest rates in this country, was 6.5 percent. Then, in response to the meltdown in the technology sector, the Fed began cutting the rate. By August 2001, it was at 3.75 percent. And after the terrorist attacks of September 11, the Fed opened the spigot. By the summer of 2002, the federal funds rate was 1 percent.
The central bank's efforts went so far that, at one point in 2003, we had interest rates below the rate of inflation, or effectively negative. Institutional investors, looking at low yields on Treasury securities, needed a place to park money and earn some kind of return. Mortgage-backed securities became a favorite investment vehicle. Under traditional models, they were very safe and, because of Fed policy, even the most conservative fund could earn better returns than they could on Treasury notes.
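The "effectively negative" point is just the Fisher approximation: the real interest rate is roughly the nominal rate minus inflation. A minimal sketch, in which the 1 percent federal funds rate comes from the article but the roughly 2 percent inflation figure is an assumption for illustration only:

```python
# Fisher approximation: real rate is roughly nominal rate minus inflation.
# The 1 percent federal funds rate is from the article; the ~2 percent
# inflation figure is assumed here purely for illustration.
fed_funds = 0.01
inflation = 0.02
real_rate = fed_funds - inflation

print(f"approximate real federal funds rate: {real_rate:.1%}")
```

On those assumptions, an investor parking money at the federal funds rate was losing about a percentage point of purchasing power a year, which is why low-yielding Treasuries pushed institutional money toward mortgage-backed securities.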
Investment houses would bundle individual mortgages from several banks together into bond-like products that they would sell to individual investors. Mortgages historically have been seen as among the safest investments, and the era of rising house values transformed "safe" into "guaranteed returns."
For the first half of this decade, trading in mortgage-backed securities exploded. Their growth provided unprecedented levels of capital in the mortgage market. At the same time, investment houses were looking to replace the healthy fees earned during the dot-com bubble. Mortgage-backed securities had fat margins, so everyone jumped into the game.
The additional capital to underwrite mortgages was a good thing—up to a point. Homeownership expanded throughout most of Bush's presidency. During the last few decades, the American homeownership rate has been around 60 percent of adult households. At the height of the bubble, it reached almost 70 percent. It is clear now that many people who got mortgages at the high-water mark should not have. But Wall Street needed to feed the stream of mortgage-backed securities.
Fannie and Freddie
It's hard to overstate the role Fannie Mae and Freddie Mac played in creating this crisis. Chartered by Congress, Fannie in 1938 and Freddie in 1970, the two government-sponsored enterprises provided much of the liquidity for the nation's housing market. Because investors believed—correctly, it turns out—that Fannie Mae and Freddie Mac were backed by an implicit guarantee from the federal government, the companies were able to raise money more cheaply than their competitors. They were also exempt from federal, state, and local taxes.
The chief mission of Fannie Mae and Freddie Mac was to buy up mortgages issued by banks, freeing up bank money for additional mortgages. Fannie and Freddie would package these mortgages into mortgage-backed securities and sell those on the secondary mortgage market, providing cash to continue the cycle. Even when selling these securities, they often retained the full risk for any default, pocketing a portion of the interest payments in return.
Fannie and Freddie would also keep a portion of these mortgages in their own investment portfolios, providing a constant influx of interest payments. Starting in the 1990s, they increasingly created and traded in complex derivatives, financial instruments designed to insulate them, through hedging, from mortgage loan defaults and interest rate increases. From the mid-'90s through the early 2000s, Fannie Mae and Freddie Mac were the darlings of Wall Street, with steady earnings growth and solid credit ratings. Fannie's share price peaked in 2001, almost 400 percent above its 1995 level; Freddie peaked in 2004, almost 500 percent higher than in 1995. This growth would not last.
In June 2003, Freddie Mac surprised Washington and Wall Street with a management shakeup. The top executives were sent packing, and a new auditor, PricewaterhouseCoopers, identified several accounting irregularities on the company's books, especially related to its portfolio of derivatives. The company would have to restate earnings for the previous several years.
Just days before, the agency responsible for regulating Freddie, the Office of Federal Housing Enterprise Oversight, had reported to Congress that the company's management "effectively conveys an appropriate message of integrity and ethical values." Just how wrong this assessment was would soon become abundantly clear.
As the extent of the accounting irregularities emerged, federal regulators descended on the company and quickly determined that the accounting troubles extended to Fannie Mae as well. With concerns about the companies growing, the Bush administration unveiled proposals to rein them in. Then-Treasury Secretary John Snow proposed putting Fannie and Freddie under his department's oversight and subjecting them to the kind of controls over risk and capital reserves that apply to commercial banks. (Fannie's debt-to-capital ratio was 30 to 1, whereas conventional banks have debt-to-capital ratios of around 11 to 1.)
But Fannie and Freddie by this point were political powerhouses. When the accounting scandal first emerged, Fannie's chairman was Franklin Raines, former director of the Office of Management and Budget under President Bill Clinton. Its vice chairman was Jamie Gorelick, a former Justice Department official who had served on the 9/11 commission. The two companies provided tens of millions of dollars in annual campaign contributions and spent more than $10 million a year combined on outside lobbyists.
Fannie and Freddie rallied their friends on Capitol Hill, who immediately pushed back against the Bush proposals. Rep. Barney Frank (D-Mass.), the ranking Democrat on the House Financial Services Committee, said, "These two entities, Fannie Mae and Freddie Mac, are not facing any kind of financial crisis. The more people exaggerate these problems, the more pressure there is on these companies, the less we will see in terms of affordable housing." The reform effort fizzled.
In 2006 the Office of Federal Housing Enterprise Oversight issued the blistering results of its investigation. The irregularities, investigators concluded, amounted to "extensive financial fraud." The purpose of the deception was clear: to "smooth" earnings from year to year in order to maintain increasing returns and maximize executive bonuses. Raines, for example, earned more than $50 million in bonuses tied to earnings growth during his six-year tenure.
Interestingly, the report noted two questionable transactions Fannie conducted with the investment bank Goldman Sachs in 2001 and 2002 that pushed more than $100 million of existing profits into the future, creating a kind of cushion for future earnings. The chairman of Goldman Sachs when the dodgy transactions took place was the man behind the 2008 bailout: Treasury Secretary Henry Paulson.
In the end, Fannie and Freddie had to restate more than $15 billion in earnings. The Office of Federal Housing Enterprise Oversight and the Securities and Exchange Commission fined Fannie $400 million and Freddie $125 million. There was a new push for tighter oversight on the Hill, but this too withered as Fannie and Freddie rallied support through increased lending to low-income borrowers.
Then Fannie and Freddie went on a subprime bender. The companies made it clear they wanted to buy up all the subprime mortgages—and Alt-A mortgages, whose risk is somewhere between prime and subprime—that they could find. They eventually acquired around $1 trillion of the paper. The market responded. In 2003 less than 8 percent of all mortgages were subprime. By 2006 the number was more than 20 percent. Banks knew they could sell subprime products to Fannie and Freddie. Investment banks realized that if they laced ever-increasing amounts of subprime mortgages into mortgage-backed securities, they could add slightly higher levels of risk and, as a result, boost the returns and earn bigger fees. The ratings agencies, thinking they were simply dealing with traditionally appreciating mortgages, didn't look under the hood.
But after several years of a housing boom, the pool of households that could responsibly use the more exotic financing products had dried up. Essentially, there were no more people who qualified for even a subprime mortgage.
Banks realized they could make ever more exotic loan products (such as interest-only loans), get the affordable housing activists off their backs, and immediately diffuse their risks by folding the mortgages into mortgage-backed securities. After all, Fannie and Freddie would buy anything.
Misfiled, I suspect, but it does give one pause: there are a lotta civilians in my sphere far better equipped and far better trained than India's constabulary.
Cops just had 577 rifles, hadn't fired in 10 yrs 2 Dec 2008, 1203 hrs IST, Prafulla Marpakwar, TNN
MUMBAI: The state constabulary was grossly unprepared to deal with the worst-ever terror attacks on the metropolis because of an acute shortage of weapons and ammunition.
Official records show that for a force of well over 1.8 lakh (180,000) personnel, the home department procured a meagre 2,221 weapons — 577 for Mumbai, and 1,644 for the rest of Maharashtra.
‘‘Under the centrally sponsored modernisation programme, we purchased almost all types of weapons, but for a state like Maharashtra, the number of weapons was grossly inadequate,’’ a senior official told TOI on Monday.
In the absence of a firing range and of ammunition for practice, members of the law enforcement agencies have not opened fire in the last ten years. ‘‘I’ve been in the police force for a long time, but I had no occasion to open fire for practice,’’ a senior inspector of police said.
As per the police manual, officials ranking from constable to assistant inspector get rifles with 30 rounds each, and those with the rank of police sub-inspector and above get revolvers, also with 30 rounds each.
Jawans with the State Reserve Police Force are given SLRs or self-loading rifles. In addition, AK-47 rifles have been given to officials posted in areas where there is Naxal activity, while officials on VIP security duty are armed with either revolvers or carbines.
The manual also prescribes mandatory training for all officials, especially shooting practice at the firing range. According to a senior IPS official, the norms prescribed in the manual now exist only on paper because of the acute shortage of ammunition for practice and the non-availability of a firing range.
As per the rules, every district should have a firing range exclusively for the police. But official records indicate that more than half the state’s districts have no independent firing range.
‘‘We have constables who have not opened fire even for practice ever since their recruitment,’’ the official said.
Enough with the pseudonyms. Western civilization isn't at war with terrorism any more than it is at war with grenades. Western civilization is at war with militant Islam, which dominates Muslim communities all over the world. Militant Islam isn't a tiny minority of otherwise goodhearted Muslims. It's a dominant strain of evil that runs rampant in a population of well over 1 billion.
Enough with the psychoanalysis. They don't hate us because of Israel. They don't hate us because of Kashmir. They don't hate us because we have troops in Saudi Arabia or because we deposed Saddam Hussein. They don't hate us because of Britney Spears. They hate us because we are infidels, and because we don't plan on surrendering or providing them material aid in their war of aggressive expansion.
Enough with the niceties. We don't lose our souls when we treat our enemies as enemies. We don't undermine our principles when we post more police officers in vulnerable areas, or when we send Marines to kill bad guys, or when we torture terrorists for information. And we don't redeem ourselves when we close Guantanamo Bay or try terrorists in civilian courts or censor anti-Islam comics. When it comes to war, extremism in the defense of liberty is no vice, and moderation in the pursuit of justice is no virtue.
Enough with the words. Talking with Iran without wielding the threat of force, either economic or military, won't help. Appealing to the United Nations, run by thugs and dictators ranging from Putin to Chavez to Ahmadinejad, is an exercise in pathetic futility. Evil countries don't suddenly decide to abandon their evil goals -- they are forced to do so by pressure and circumstance.
Enough with the faux allies. We don't gain anything by pretending that Saudi Arabia and Pakistan are true allies. They aren't. At best, they are playing both sides of the table. We ought to be drilling now in order to break OPEC. Building windmills isn't going to cut it. We should also be backing India to the hilt in its current conflict with Pakistan -- unless Pakistan can destroy its terrorist element, India should be given full leeway to do what it needs to do. Russia and China, meanwhile, are facilitating anti-Western terrorism. Treating them as friends in this global war is simply begging for a backstabbing.
Enough with the myths. Not everyone on earth is crying out for freedom. There are plenty of people who are happy in their misery, believing that their suffering is part and parcel of a correct religious system. Those people direct their anger outward, targeting unbelievers. We cannot simply knock off dictators and expect indoctrinated populations to rise to the liberal democratic challenge. The election of Hamas in the Gaza Strip is more a rule than an exception in the Islamic world.
Enough with the lies. Stop telling us that Islam is a religion of peace. If it is, prove it through action. Stop telling us that President-elect Barack Obama will fix our broken relationship with the Muslim world. They hate Obama just as much as they hated President George W. Bush, although they think Obama is more of a patsy than Bush was. Stop telling us that we shouldn't worry about the Islamic infiltration of our economy. If the Saudis own a large chunk of our banking institutions and control the oil market, they can certainly leverage their influence in dangerous ways.
Enough. After the World Trade Center, the Pentagon, the plane downed in Pennsylvania, the endless suicide bombings, shootings and rocket attacks in Israel, the Bali bombings, the synagogue bombing in Tunisia, the LAX shootings, the Kenyan hotel bombing, the Casablanca attacks, the Turkey synagogue attacks, the Madrid bombings, the London bombings, and the repeated attacks in India culminating in the Mumbai massacres -- among literally thousands of others -- it's about time that the West got the point: we're in a war. Our enemies are determined. They will not quit just because we offer them Big Macs, Christina Aguilera CDs, or even the freedom to vote. They will not quit just because we ensure that they have Korans in their Guantanamo cells, or because we offer to ban The Satanic Verses (as India did). They will only quit when they are dead. It is our job to make them so, and to eliminate every obstacle to their destruction.
So enough. No more empty talk. No more idle promises. No more happy ignorance, half measures, or appeasement-minded platitudes. The time for hard-nosed, uncompromising action hasn't merely come -- it's been overdue by seven years. The voice of our brothers' blood cries out from the ground.
Bad CAIR Day: Shutting down an engine for stealth jihad.
By Frank J. Gaffney Jr.
The Council on American Islamic Relations (CAIR) had a rough 24 hours earlier this week. Given the organization’s ties to the seditious Muslim Brotherhood, and specifically its role in advancing the stealth jihad used to insinuate into this country the totalitarian program authoritative Islam calls “sharia,” any bad CAIR day is a good day for America.
The problems for CAIR started a week ago Sunday night when its co-founder and executive director, Nihad Awad, was served with a court summons during his group’s annual fund-raising dinner in Arlington, Va. In front of some 700 people and the one Muslim member of Congress, Rep. Keith Ellison (D., Minn.), Awad and several other CAIR officials were formally put on notice that they and their organization were being sued for racketeering and fraud by four former clients. The suit seeks, in addition to damages, to shut CAIR down and to enjoin the defendants from engaging in public-interest legal work in the future.
According to the plaintiffs, they were defrauded by Morris Days, a purported “Resident Attorney” and “Manager for Civil Rights” at CAIR’s now-disestablished Maryland/Virginia chapter in Herndon, Virginia. As the complaint details, he was not, in fact, an attorney and allegedly failed to provide the plaintiffs with legal services for which they had paid. According to internal CAIR documents referenced in the complaint, there may be hundreds of other members who were injured by this CAIR-Days fraud.
If so, the other victims may be unaware of what has been perpetrated upon them, since CAIR allegedly covered up its failure to check Days’s background and his misconduct while in its employ. The organization is alleged not only to have concealed this massive fraud from its clients but also to have failed to notify law-enforcement authorities, the relevant bar associations, or the public about the wrongdoing.
Instead, according to the complaint, when confronted with members angry about Days’s non-performance, the organization compounded its misdeeds by engaging in a cover-up. CAIR claimed that the “attorney” had not actually been in its employ and concealed the fact that Days had been terminated for engaging in criminal fraud.
Worse yet, the plaintiffs allege that CAIR National officials compelled their clients seeking to have their legal fees refunded to sign a release precluding any revelation of this fraud to the appropriate authorities or the press — on pain of being sued by CAIR for up to $25,000. According to a press release issued by the plaintiffs’ counsel, my friend and colleague David Yerushalmi: “This enforced code of silence left hundreds of CAIR client-victims in the dark such that to this day they have not learned that Days is not an attorney and that he had not filed the legal actions on their behalf for which Days and CAIR publicly claimed credit.”
Joining Nihad Awad as named defendants in the federal lawsuit against CAIR are: Parvez Ahmed, chairman of the organization’s board during the period of the alleged misconduct; Tahra Goraya, the national director of CAIR at the time; Khadijah Athman, manager of CAIR’s “civil rights” division; Nadhira al-Khalili, CAIR’s in-house legal counsel; and Ibrahim Hooper and Amina Rubin, respectively the group’s director and coordinator of communications.
If allegations that CAIR exploits and abuses Muslims in America — rather than, as it endlessly claims, serving and protecting their rights — were not bad enough, a federal jury in Dallas dealt the organization another, potentially devastating, body blow. Jurors found principals of the Holy Land Foundation for Relief and Development guilty as charged in a terrorism-financing conspiracy. CAIR had been named as an unindicted co-conspirator in that case.
Indeed, in the course of the original trial last year and the just-concluded retrial, the prosecution introduced into evidence damning information about CAIR’s ties not only to the Holy Land Foundation but to the Muslim Brotherhood. The precursor to CAIR was the Islamic Association for Palestine (IAP), itself a front for Hamas. IAP was listed as one of a large number of associated groups in a 1991 internal Brotherhood memorandum. The memo laid out the MB’s work in America as a “kind of grand Jihad in eliminating and destroying the Western civilization from within and ‘sabotaging’ its miserable house by their hands and the hands of the believers so that it is eliminated and Allah’s religion is made victorious over all other religions.”
As Steven Emerson’s invaluable Investigative Project on Terrorism has observed:
In June 2007, federal prosecutors…designate[d] CAIR as a co-conspirator because of its associations with the US Muslim Brotherhood's Palestine Committee. Prosecutors say that the Palestine Committee was created specifically to help Hamas through financial and political support in the United States. CAIR co-founder Omar Ahmad (its current chairman emeritus) is listed as an individual committee member and is an unindicted co-conspirator, too. In other cases, CAIR employees have been prosecuted for engaging in their own conspiracies.
The allegations about CAIR’s conduct in the Days affair and the guilty verdict rendered against its co-conspirators in the Holy Land case point up a central reality. In the words of a wise lawyer, shady organizations, even stealth ones, invariably engage in culpable conduct no matter how sophisticated they are, because there are too many loose ends and you cannot control all of them. That appears to be what happened with the Muslim Brotherhood’s stealth-jihad operatives at CAIR.
Armed with the verdict of the Holy Land Foundation trial, it is high time for federal prosecutors to turn their sights on CAIR beyond simply naming them as an unindicted co-conspirator. By opening up their own investigation based on the evidence already proven in the HLF trial and the troubling allegations in the civil lawsuit, the government may soon turn a bad CAIR day into curtains for this Muslim Brotherhood engine for stealth jihad.
— Frank J. Gaffney Jr. is president of the Center for Security Policy and a contributor to NRO.
The business case for high-seas piracy
By Bernd Debusmann | November 26th, 2008
– Bernd Debusmann is a Reuters columnist. The opinions expressed are his own –
As far as illicit businesses with low risk and high rewards go, it doesn’t get much better than piracy on the high seas. The profit margins can easily surpass those of the cocaine trade. The risks?
“There is no reason not to be a pirate,” according to U.S. Vice Admiral William Gortney, who commands the U.S. navy’s Fifth Fleet. “The vessel I’m trying to pirate, they won’t shoot at me. I’m going to get my money.”
Even pirates who are intercepted have little to fear. “They won’t arrest me because there’s no place to try me.”
Gortney’s assessment of piracy’s low risk came in a radio interview that focused on the Gulf of Aden, where Somali pirates this month capped a string of increasingly brazen hijackings by seizing a Saudi supertanker carrying $100 million worth of U.S.-bound crude. But although attention is focused on the Horn of Africa, piracy is a global phenomenon, relative impunity applies in many places, and a thick legal fog hangs over effective action.
Among questions to keep lawyers busy: Can a naval vessel fire on a suspected pirate ship? It depends. Who would be held accountable for someone killed in an exchange of fire between pirates and private security personnel traveling aboard a merchant ship? Which country’s jurisdiction applies, for example, to a Somali arrested on the high seas and taken aboard a Danish vessel?
“One of the challenges that we have…in piracy clearly is if you are intervening and you capture pirates, is there a path to prosecute them?” Admiral Mike Mullen, the chairman of the U.S. Joint Chiefs of Staff, explained at a recent Pentagon briefing.
A rough back-of-the-envelope calculation shows that the operation to hijack the Saudi tanker, the Sirius Star, cost no more than $25,000, assuming that the pirates bought new equipment and weapons ($450 apiece for an AK-47 Kalashnikov, $5,000 for an RPG-7 grenade launcher, $15,000 for a speedboat). That contrasts with the pirates’ initial ransom demand of $25 million, made to the tanker’s owner, Saudi Aramco.
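The arithmetic behind that contrast, using only the figures quoted above ($25,000 is the upper-bound estimate for the whole operation):

```python
operation_cost = 25_000     # upper-bound outlay estimated in the article
ransom_demand = 25_000_000  # initial demand reported for the Sirius Star

# A successful hijacking returns three orders of magnitude on the outlay.
print(f"return on a successful hijack: {ransom_demand / operation_cost:,.0f}x")
```

A thousand-to-one payoff, with little prospect of arrest or prosecution, is the "excellent business model" described below.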
“Piracy is an excellent business model if you operate from an impoverished, lawless place like Somalia,” says Patrick Cullen, a security expert at the London School of Economics who has been researching piracy. “The risk-reward ratio is just huge.”
One way to shrink that ratio would be to place private security guards on vessels that ply shipping routes prone to pirate attack, from the waters off Nigeria to the Molucca Straits and the Horn of Africa. That’s the solution recommended by the commander of the U.S. Fifth Fleet, whose area of responsibility covers 7.5 million square miles, including the waters off Somalia. Its warships can’t be everywhere.
Even with the additional deployment of warships from France, Britain, Denmark, Russia, India, Japan, Korea, and Malaysia, the navies are looking for needles in a haystack. The pirates launch speedboats from mother ships hundreds of miles off the coast.
BLOW THEM OUT OF THE WATER
Carrying armed guards aboard ships sounds like a simple, straightforward solution. They stand watch; they fire warning flares at an approaching speedboat manned by what look like pirates. If the vessel doesn’t turn away, they blow it out of the water. End of story.
Except if the incident somehow turned into a court case and the ship’s crew and guards had to prove that the men in the approaching speedboat were driven by criminal intent. By some definitions, an act of piracy doesn’t begin until the grappling hooks are thrown over the side and the pirates start clambering up.
In the past, shipping companies, by and large, have been reluctant to add armed personnel to their crews, partly for reasons of cost - a security team can add $30,000 to $60,000 or more to a voyage - and partly because the statistical chance of having their ships hijacked or attacked is relatively small.
The International Maritime Organization puts the world trading fleet at 50,525 ships. In the first nine months of this year, the International Maritime Bureau’s piracy reporting center in Kuala Lumpur recorded 199 attacks on ships, including 36 hijackings. In percentage terms, this is not much.
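"Not much" can be made concrete with the figures just cited from the IMO and the International Maritime Bureau:

```python
world_fleet = 50_525  # IMO estimate of the world trading fleet
attacks = 199         # IMB-recorded attacks, first nine months of the year
hijackings = 36       # of which hijackings

print(f"share of fleet attacked:  {attacks / world_fleet:.2%}")
print(f"share of fleet hijacked:  {hijackings / world_fleet:.3%}")
```

Roughly four ships in a thousand were attacked over those nine months, and fewer than one in a thousand was actually hijacked.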
But the targets, and the ransom demands, have been getting bigger. The Sirius Star was taken less than two months after the hijacking of a Ukrainian freighter, the Faina, which carried some 30 T-72 tanks, crates of rocket-propelled grenades, anti-aircraft guns and thousands of rounds of ammunition. That capture made world headlines and raised fresh questions over existing anti-piracy tactics.
Private security firms see new markets and new opportunities. Several British firms have begun teaming up with insurance companies that offer lower rates for ships carrying security teams.
Anti-pirate devices now coming into use range from razor wire strung along the side of ships to sound cannon - a weapon that beams ear-splitting noise at suspected attackers.
One U.S. company, Blackwater Worldwide, is offering maritime escort services with a 183-foot vessel that carries two helicopters, a crew of 15 and 35 guards. Blackwater says 13 shipping companies have expressed interest.
To make pirates think twice about the risk-reward ratio, nothing is likely to be as effective as brute force. But those who warn that 18th century methods can be problematic in the 21st can now point to the example set by the Indian frigate Tabar on November 18.
According to the Indian navy, the Tabar had come under fire from a suspected pirate mother ship that had failed to obey a command to stop.
The Indian frigate returned fire, “in self defense.” The ship blew up in a ball of fire and sank.
A week later, it turned out that the suspected mother ship was a Thai freighter that was being taken over by pirates when the frigate approached.
And, the offshore oil would be sold back to the US at the international rate, which today is $106 a barrel.
Not sure what the "international rate" means, but all the price quotes I can find today have oil listed at $55/bbl, more or less. I know there are some very perverse incentives in the US tax code etc. that make convoluted oil shipment schemes profitable, but I hadn't heard of anything that doubled the price.
Be that as it may, solving problems in their entirety is a fairly rare thing; are we not to take what steps we can as we wait for the comprehensive solution to burst on to the scene?
BABY BOYS MAY SHOW SPATIAL SUPREMACY
Male superiority on mental rotation tasks may develop within a few months after birth
By Bruce Bower | Web edition: Tuesday, November 25th, 2008
NEW ANGLES ON BLOCKS: How long babies spent looking at rotated blocks and the mirror images of blocks was a measure of the ability to mentally rotate an object. IMAGE CREDIT: Robert M. Ditto
The gender gap in spatial abilities — charted for more than 30 years — emerges within the first few months of life, years earlier than previously thought, psychologists report.
Males typically outperform females on spatial-ability tests by age 4, especially on tasks that require mental rotation of objects perceived as three-dimensional. Yet two studies of 3- to 5-month-olds, both published in the November Psychological Science, conclude that a substantially greater proportion of boys than girls distinguish a block arrangement from its mirror image, after having first seen the block arrangement rotated. Babies who prefer looking at the mirror image are presumed to have mentally rotated the block arrangement, recognized it and chosen to gaze at the novel mirror image.
One investigation was conducted by David Moore of Pitzer College in Claremont, Calif., and Scott Johnson of the University of California, Los Angeles. The other was directed by Paul Quinn of the University of Delaware in Newark and Lynn Liben of Pennsylvania State University in University Park.
Both sets of researchers suspect that sex differences in mental rotation develop shortly after birth due to an unknown mix of genetic, biological and environmental influences.
“The result we found was really somewhat of a shocker,” Moore says. He had expected to demonstrate no sex difference in infants’ mental rotation skills, laying the groundwork for pinpointing the age at which this spatial gap first appears.
“Simultaneous reports by two different labs using two different techniques are difficult to dismiss,” remarks psychologist Nora Newcombe of Temple University in Philadelphia.
Still, the new reports don’t confirm that baby boys perform mental rotation tasks better than baby girls do, comments psychologist Susan Levine of the University of Chicago. That’s because both studies first familiarized babies with a block arrangement oriented at specific angles but then presented it from a new angle for comparison with its mirror image, a process that may mask baby girls’ spatial insights.
By 3 months of age, girls — but not boys — may notice changes in a block arrangement’s angle, Levine proposes. If so, girls would regard both a newly oriented block arrangement and its mirror image as novel, spending roughly equal amounts of time looking at both. Scientists have yet to address this possibility, she says.
If infant boys don’t notice angle shifts, they would spend most of the time looking at novel mirror images, Levine suggests. Baby boys would thus falsely appear to be better than baby girls at mental rotation.
“Even if there is an early advantage in favor of males, there is ample research showing that mental rotation skill is malleable,” Levine says. Preschool activities such as block building, assembling jigsaw puzzles and playing certain video games have been linked to stronger mental rotation skill. In 2005, Levine reported that second- and third-graders from poor families, who receive little or no exposure to such activities, show no sex difference in the ability to mentally rotate an object.
Some parents play with their children and babies in ways that promote spatial thinking, such as naming the shapes of toys and guiding a child’s hand to rotate a toy, notes Penn State’s Liben. It’s not known whether parents target such behavior at boys, she says.
Researchers have yet to show that early proficiency on mental rotation tasks translates into an aptitude for spatially challenging subjects such as geometry, geography and science, Levine cautions.
Moore and Johnson showed 20 boys and 20 girls, all 5 months old, videos of a block arrangement rotating back and forth through a 240° angle. Each child sat in his or her mother’s lap as the mother kept her eyes closed. After tiring of looking at this image, infants saw alternating videos of the original block arrangement or its mirror image rotating through a 120° angle.
Video records of infants’ gaze and head movements revealed that 14 boys, or 70 percent of them, preferred looking at mirror images, compared with 9 girls, or 45 percent of them.
Quinn and Liben showed 12 boys and 12 girls, all 3 to 4 months old, a series of images of either a black number 1 or its mirror image, each drawn to appear three-dimensional and situated at a different degree of rotation. Each baby then saw presentations of both the number 1 and its mirror image in a new degree of rotation.
In the latter trials, 11 boys preferred looking at the image that they hadn’t seen before, compared with 5 girls.
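The articles report the raw counts but no significance test. As a rough back-of-envelope check (my own calculation, not from the story), a one-sided Fisher exact test on the Moore and Johnson counts — 14 of 20 boys versus 9 of 20 girls preferring the mirror image — can be computed from the hypergeometric distribution with nothing but the standard library:

```python
# One-sided Fisher exact test for a 2x2 table, stdlib only.
# Table for the Moore & Johnson study: [[14, 6], [9, 11]]
# (boys preferring mirror / not, girls preferring mirror / not).
from math import comb

def fisher_one_sided(a, b, c, d):
    """P(row-1 successes >= a) under the hypergeometric null for [[a,b],[c,d]]."""
    n1 = a + b          # row-1 total (boys)
    k = a + c           # column-1 total (all infants preferring the mirror image)
    n = a + b + c + d   # grand total
    p = 0.0
    for x in range(a, min(n1, k) + 1):
        p += comb(k, x) * comb(n - k, n1 - x) / comb(n, n1)
    return p

p = fisher_one_sided(14, 6, 9, 11)
print(f"one-sided p = {p:.3f}")  # ~0.10
```

On these numbers the difference is suggestive but would not clear a conventional 0.05 threshold on its own, which is why the convergence of two independent labs (as Newcombe notes above) carries much of the evidentiary weight.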
It may be possible to study mental rotation in babies within the first few days after birth, Quinn says.
BHO take note: the claimed economic boon to be caused by green technologies isn't surviving scrutiny.
LAO to the rescue
It shreds claims climate policy will spur boom
November 27, 2008
For more than two years, Gov. Arnold Schwarzenegger, leading California Democrats and environmentalists have insisted that AB 32 – a 2006 law requiring California businesses and residents to use cleaner but far more costly sources of energy by 2020 – would actually prove to be an economic bonanza. They asserted it would position the state to lead the world in green technology and reduce societal costs stemming from air pollution.
This claim falls apart under the slightest inspection. Sure, some well-positioned industries might thrive. But how could sharply increasing the operating costs of most businesses and reducing the disposable income of most individuals help the overall economy? Despite its illogic, Schwarzenegger's argument has largely gone unchallenged.
Thankfully, that is no longer the case. Last week, the nonpartisan Legislative Analyst's Office released a study of the California Air Resources Board's “scoping plan” for implementing AB 32 and related measures. In low-key fashion, the study demolished the happy talk of the governor and his allies.
The LAO said the air board's report showing the “purported net economic benefit” of emission-reduction laws used “inconsistent and incomplete” methodology in which researchers ignored or downplayed evidence that countered the economic bonanza thesis.
Here's the most egregious example: Any measure that reduced a company's greenhouse gases in any way was deemed “cost-effective.” By this standard, it's cost-effective for a company to go out of business.
We asked the governor's staff to respond. What we got was a boilerplate statement repeating Schwarzenegger's happy talk and ignoring the LAO's conclusions.
This is unacceptable. In the governor's own words, the LAO has an “impeccable reputation” and has set a “high standard” with its analysis of state issues. When the LAO raises profound questions about the wisdom of a major state policy, state leaders are obligated to take these questions seriously.
We have no doubt Schwarzenegger sincerely believes that it is essential for California, the nation and the world to tackle global warming. But the rhetoric the governor employs to push his cause is devoid of candor.
This was underscored by recent news reports about the European Union, China and India – three of America's biggest trade rivals. All are rethinking recent commitments to cleaner fuels because of the likelihood that the added cost could make it more difficult for their already-reeling economies to rebound. In Europe and Asia, you see, there is a matter-of-fact acceptance that a shift toward cleaner energy has a downside.
The contrast with Schwarzenegger's pretend world could not be more striking. Perhaps we can import some honesty from Beijing, New Delhi or Brussels. On climate change, there's little to be found in Sacramento.
But on a serious note, BBG, what do you think? Is polygamy so bad? Should it be illegal? Or do you think it should be allowed between consenting adults regardless of their religious affiliation?
One sec, let me ask my wife. . . .
Rats, she's opposed to my nominees.
As stated elsewhere, I care not at all what consenting adults do behind closed doors. With that said, any policy should be consistent (yo, Mormons), should not be used as an immigration dodge, should be good for goose and gander alike, and shouldn't be a vehicle to get lots of folks on the public dole. As also stated elsewhere, I'm pretty uncomfortable with patriarchal systems claiming free choice as they keep their women segregated in veils behind locked doors. Choice should be choice in the Western sense, instead of the "choose any option off this list of one" arrangement we've discussed elsewhere.
By Daniel Pipes FrontPageMagazine.com | 11/25/2008
A Scottish judge recently bent the law to benefit a polygamous household. The case involved a Muslim male who drove 64 miles per hour in a 30 mph zone – usually grounds for an automatic loss of one's driving license. The defendant's lawyer explained his client's need to speed: "He has one wife in Motherwell and another in Glasgow and sleeps with one one night and stays with the other the next on an alternate basis. Without his driving licence he would be unable to do this on a regular basis." Sympathetic to the polygamist's plight, the judge permitted him to retain his license.
This ruling suggests that monogamy, long a foundation of Western civilization, is silently eroding under the challenge of Islamic law. Should current trends continue, polygamy could soon be commonplace.
Since the 1950s, Muslim populations have grown in Western Europe and North America via immigration and conversion; with their presence has grown the Islamic form of polygyny (one man married to more than one woman). Estimates find 2,000 or more British polygamous men, 14,000 or 15,000-20,000 harems in Italy, 30,000 harems in France, and 50,000-100,000 polygamists in the United States.
Some imams openly acknowledge conducting polygamous marriage ceremonies: Khalil Chami reports that he is asked almost weekly to conduct such ceremonies in Sydney. Aly Hindy reports having "blessed" more than 30 such nuptials in Toronto.
Social acceptance is also growing. Academics justify it, while politicians blithely meet with polygamists or declare that Westerners should "find a way to live with it" and journalists describe polygamy with empathy, sympathy, and compassion. Islamists argue polygamy's virtues and call for its official recognition.
Polygamy has made key legal advances in 2008. (For fuller details, see my blog, "Harems Accepted in the West.") At least six Western jurisdictions now permit harems on the condition that these were contracted in jurisdictions where polygamy is legal, including India and Muslim-majority countries from Indonesia to Saudi Arabia to Morocco.
United Kingdom: Bigamy is punishable by up to seven years in jail, but the law recognizes harems already formed in polygamy-tolerant countries. The Department of Work and Pensions pays couples up to £92.80 (US$140) a week in social benefits, and each multiculturally-named "additional spouse" receives £33.65. The Treasury states that "Where a man and a woman are married under a law which permits polygamy, and either of them has an additional spouse, the Tax Credits (Polygamous Marriages) Regulations 2003 allow them to claim tax credits as a polygamous unit." Additionally, harems may be eligible for additional housing benefits to reflect their need for larger properties.
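For scale, here is the straightforward arithmetic implied by the UK figures just quoted — my own calculation, not from the article, and it assumes the weekly rates simply add (£92.80 for the couple plus £33.65 per additional spouse):

```python
# Back-of-envelope totals from the quoted DWP weekly rates.
# Assumption (mine, not the article's): rates are additive per spouse.
COUPLE_RATE = 92.80       # £/week for the couple
ADDITIONAL_RATE = 33.65   # £/week per "additional spouse"

def weekly_benefit(additional_spouses: int) -> float:
    """Weekly benefit in pounds for a couple plus n additional spouses."""
    return COUPLE_RATE + ADDITIONAL_RATE * additional_spouses

for extra in (1, 2, 3):
    w = weekly_benefit(extra)
    print(f"{extra} additional spouse(s): £{w:.2f}/week, £{w * 52:,.2f}/year")
```

So a household with two additional spouses would draw roughly £160 a week, or over £8,300 a year, before any housing supplement.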
The Netherlands: The Dutch justice minister, Ernst Hirsch Ballin, has announced that polygamous Muslim marriages should not be dealt with through the legal system but via dialogue.
Belgium: The Constitutional Court took steps to ease the reunification of harems formed outside the country.
Italy: A court in Bologna allowed a Muslim male immigrant to bring the mothers of his two children into the country on the grounds that the polygamous marriages had been legally contracted.
Australia: The Australian newspaper reports "it is illegal to enter into a polygamous marriage. But the federal government, like Britain, recognises relationships that have been legally recognised overseas, including polygamous marriages. This allows second wives and children to claim welfare and benefits."
Ontario, Canada: Canadian law calls for polygamy to be punished by a prison term but the Ontario Family Law Act accepts "a marriage that is actually or potentially polygamous, if it was celebrated in a jurisdiction whose system of law recognizes it as valid."
Thus, for the cost of two airplane tickets, Muslims potentially can evade Western laws. (One wonders when Mormons will also wake to this gambit.) Rare countries (such as Ireland) still reject harems; generally, as David Rusin of Islamist Watch notes, "governments tend to look the other way as the conjugal mores of seventh-century Arabia … take root in our backyards."
At a time when Western marriage norms are already under challenge, Muslims are testing legal loopholes and even seeking taxpayer support for multiple brides. This development has vast significance: just as the concept of one man, one woman marriage has shaped the West's economic, cultural, and political development, the advance of Islamic law (Shari‘a) will profoundly change life as we know it.
Mr. Pipes (www.DanielPipes.org) is director of the Middle East Forum and Taube distinguished visiting fellow at the Hoover Institution of Stanford University.
'The Patriarch of Liberty'
Restoring Sam Adams to his rightful place among the founders
Michael C. Moynihan | November 25, 2008
When John Adams traveled to France in 1779 to confer with America's Revolutionary War allies, Parisians lamented that they would not be playing host to "the famous Adams." That title was reserved for the future president's cousin, the muckraking journalist turned zealous revolutionary, Samuel Adams.
So it is odd, then, that this Zelig of independence, present at virtually every revolutionary convulsion of early America, is now remembered mostly for lending his name to a popular brand of beer. As Ira Stoll observes in Samuel Adams: A Life, his engaging and hagiographic biography of this forgotten founding father, a name once synonymous with the American independence movement was "lost in the attic of history."
This is unfortunate, says Stoll, the former managing editor of The New York Sun, because it was Adams who acted as the "moral conscience of the American Revolution." Indeed, it was Adams who helped precipitate the revolutionary unrest, skillfully whipping up public sentiment against British attempts to tax his fellow colonists without allowing them parliamentary representation and, through his pseudonymous newspaper column, inflaming public passions following the Boston Massacre.
Adams was an early and unwavering supporter of separation from Britain, and totally uninterested in compromise or reconciliation with America's imperial masters. When King George III asked Thomas Hutchinson, the former colonial governor of Massachusetts, to provide intelligence on the situation in America, he singled out Adams as "the first that publically asserted the independency of the colonies." As a measure of Adams's influence, Stoll points out that when England proffered a pardon for all citizens engaged in revolutionary activity in exchange for a cessation of violence, the only two Bostonians exempted from the deal were Adams and his friend John Hancock.
But Adams was not merely an agitator of mobs. The Massachusetts constitution (1779), which Adams "patiently navigated . . . through revision after revision, and then to ratification," enumerated the "natural, essential and unalienable rights" of "all men." And as Stoll notes, it not only provided the foundation upon which the federal constitution was built, but was later cited when state courts abolished slavery and legalized same-sex marriage.
Stoll argues that, for a man of his times, Adams possessed enlightened, if imperfect, views of slavery and religious liberty (excepting his fanatical anti-Catholicism), and understood that the foundation of a free society was the constitutional guarantee of private property rights. "Property rights, after all," Stoll writes, "were one of Adams's main arguments against taxation by the British." It was the one issue he stressed "almost as much as religious rights in arguing against Britain's treatment of the colonies."
But Christianity was the dominant theme of his writing. He argued strenuously that liberty and religion were inextricably linked, commenting that "whether America shall long preserve her freedom or not, will depend on her virtue" because once Americans "lose their virtue they will be ready to surrender their liberties to the first external or internal invader."
But he could also be a moral scold, at times sounding like a proto-social conservative. Adams stridently campaigned against "theatrical entertainments," inveighing against the supposedly deleterious effects of horse racing, theater-going, dancing, card playing and salty language. The curbing of such "idle amusements" was necessary, he believed, to restore virtue and to preserve revolutionary gains.
Stoll offers not only a compelling portrait of an overlooked figure, but a crisp intellectual history of the American Revolution and its main players. And he reminds readers that it was John Adams who remarked upon his cousin's death that "Without the character of Samuel Adams, the true history of the American Revolution can never be written." With Samuel Adams: A Life, Stoll has succeeded in returning the man Thomas Jefferson called "the patriarch of liberty" to his proper place in the pantheon of great revolutionaries.
Michael C. Moynihan is an associate editor at reason. This article originally appeared at The New York Post.
One of my ongoing major annoyances is that few in the MSM and few in various legislatures know squat about firearms. Indeed, one of the reasons I think the MSM and politicians are held in such low esteem is that just about everyone can think of some local gun guru they can call to get accurate firearm information, a guru that the MSM and politicos, however, are unable to find. This leads to all sorts of assault-weapon, Black Rhino bullet, and disposable-machine-gun damnfoolishness appearing in what claim to be reputable sources of information.
In this instance, Obamaniacs appear to be trolling for dirt related to a constitutionally protected behavior, apparently utterly unaware that most firearms in the US are unregistered. Methinks they need to get out of DC and Chicago more often:
Is Owning a Gun More or Less Embarrassing Than Hiring an Illegal Alien?
Jacob Sullum | November 24, 2008, 3:29pm
Is it significant that the 63 questions would-be Obama appointees have to answer include a query about gun ownership? Question 59, which comes right after one about "any websites that feature you in either a personal or professional capacity" and right before one about the applicant's medical condition, reads:
Do you or any members of your immediate family own a gun? If so, provide complete ownership and registration information. Has the registration ever lapsed? Please also describe how and by whom it is used and whether it has been the cause of any personal injuries or property damage.
Some Second Amendment advocates have taken offense at this question, which appears to be unprecedented and which they think reflects Obama's lack of enthusiasm for gun rights. But an Obama aide told Politico "the intent of the gun question is to determine legal permitting." In other words, just as the questions about "domestic help" do not imply that anyone who has ever paid someone to clean his house can forget about working in the Obama administration, the question about firearms does not mean gun owners are automatically disqualified. Obama's transition team just wants to make sure all legal niceties have been observed, so there aren't any embarrassing surprises.
Although that sounds plausible, there are many other things an applicant might possess that raise issues of legal compliance and of damages to others but are not specifically mentioned in the questionnaire, including cars, boats, pets, swimming pools, and trampolines. The selection of guns out of all the dangerous and/or regulated things people own is telling, I think, and suggests that gun ownership itself might be deemed an embarrassment, at least past a certain threshold. Note that the question assumes no one applying for a job in the Obama administration would own more than one gun. It also assumes gun-owning job candidates would be subject to registration requirements, which apply in only a small fraction of the country.
Ambrose Bierce once defined piracy as "Commerce without its follyswaddles, just as God intended it." Alas, as this piece outlines, the current geopolitical mishmash caused by UN policy and other international dictates makes dealing with piracy so complicated that many nations are choosing to kick the convoluted can down the road.
Column One: Civilization walks the plank
Nov. 20, 2008
Caroline Glick, THE JERUSALEM POST
A Somali pirate and a former US defense secretary are flying to London for vacation. One of them is stopped at immigration at Heathrow airport and arrested on suspicion of committing war crimes. Which one do you think it was?
On Tuesday, Somali pirates, sailing in little more than motorized bathtubs, armed with automatic rifles and rocket-propelled grenades, and sustained by raw fish and narcotics, successfully hijacked the Sirius Star, a Saudi-owned oil tanker the size of a US aircraft carrier. The tanker was carrying some $100 million worth of crude oil. News of its capture caused global oil prices to rise by a dollar a barrel.
The next day, Somali pirates attempted to hijack the Trafalgar, a British frigate, but were forced to flee by a German naval helicopter dispatched to the scene. They did manage to hijack a Chinese trawler and a cargo ship from Hong Kong. They nearly got control of an Ethiopian ship, but it, too, was saved by the German Navy that heeded its call for help in time.
Piracy is fast emerging as the newest old threat to stage a comeback in recent years. Over the past week and a half alone, 12 vessels have been hijacked. And according to the International Maritime Bureau, in the three months that ended on September 30, Somali pirates attacked 26 vessels, capturing 576 crew members. Britain's Chatham House (the Royal Institute of International Affairs) assesses the ransoms they netted at between $18m. and $30m.
And with financial strength comes increased military sophistication. The US Navy expressed shock at the pirates' successful hijacking of the Sirius Star. The pirates staged the hijacking much farther from shore than they had ever done previously.
Beyond the personal suffering incurred by thousands of crew members taken hostage in recent years, piracy's potential impact on global economic stability is enormous. In the Gulf of Aden, where the Somali pirates operate, US shippers alone transport more than $1.5 trillion in cargo annually.
One of the unique characteristics of pirates is that they appear to be equal opportunity aggressors. They don't care who owns the ships they attack. On August 21, Somali pirates hijacked the Iran Deyanat, a ship owned and operated by the Iranian Revolutionary Guards-linked Islamic Republic of Iran Shipping Line (IRISL). In September, the US Treasury Department designated IRISL as a company that assists Iran's nuclear weapons program and placed it under stiff financial sanctions.
Iran Deyanat's manifest asserted that its cargo included minerals. Yet shortly after the pirates went on board they began developing symptoms such as hair loss that experts claim are more in line with radiation exposure. According to reports, some 16 pirates died shortly after being exposed to the cargo. Just this week, a second Iranian ship - this one apparently carrying wheat - was similarly captured.
Then, too, in September, pirates seized the Faina, a Ukrainian ship carrying 33 Russian-made T-72 tanks. The Ukrainians and Russians claimed that the tanks were destined for Kenya, but it later emerged that they may have been seized en route to Sudan. So, ironically, in the case of both the Faina and the Deyanat, pirates may have inadvertently saved thousands of lives.
THE INTERNATIONAL community is at a loss for what to do about the emerging danger of piracy. This is not due to lack of capacity to fight the pirate ships. On Monday an Indian naval frigate, the INS Tabar, sank a pirate "mother ship" whose fleet members were attacking the Tabar in the Gulf of Aden. NATO has deployed a naval task force while the American, French, German and other navies have aggressively worked to free merchant ships under attack by pirates.
As David Rivkin and Lee Casey explained in The Wall Street Journal on Wednesday, the problem with contending with piracy is not so much military as legal and political. Whereas customary international law defined piracy as a threat against all nations and therefore a crime for which universal jurisdiction must be applied to perpetrators, in today's world, states are unwilling to apprehend pirates or to contend with them because they are likely to find themselves in a sticky legal mess.
In centuries past, in accordance with established international law, it was standard practice for naval captains to hang pirates after capturing them. Today, when Europe has outlawed capital punishment, when criminal defendants throughout the West are given more civil rights than their victims, and when irregular combatants picked off of battlefields or intercepted before they attack are given - at a minimum - the same rights as those accorded to legal prisoners of war, states lack the political will and the moral clarity to prosecute offenders. As Casey and Rivkin note, last April the British Foreign Office instructed the British Navy not to apprehend pirates lest they claim that their human rights were harmed, and request and receive asylum in Britain.
THE WEST'S perverse interpretations of human rights and humanitarian law, which bar it from handling one of the most acute emerging threats to the international economy, are a consequence of the West's abdication of moral and legal sanity in its dealings with international terror. In the 1960s and 1970s, when international terrorism first emerged as a threat to international security, the West adopted international treaties and conventions that tended to treat terrorism as a new form of piracy. Like piracy, terrorism was to be treated as an attack on all nations. Jurisdiction over terrorists was to be universal. Such views were codified in documents like the 1970 Convention for the Suppression of Unlawful Seizure of Aircraft, which established a principle of universal jurisdiction over aircraft hijackers.
Similarly, in the wake of the September 11 attacks on the US, the UN Security Council passed binding Resolution 1373, which also compelled member states not only to treat terrorists as illegal combatants who must be universally denied any support of any kind, but to take action against anyone involved with or supporting terrorists in any way. That is, as in piracy, the tendency of states contending with terrorism has been to view it as an act requiring universal jurisdiction, compelling all UN member states to prosecute offenders.
And yet, over the years, states have managed to ignore or invert international laws on terrorism to the point where today terrorists are among the most protected groups of individuals in the world. Due to political sympathy for terrorists, hostility toward their victims, or fear of terrorist reprisals against a state that dares to prosecute terrorists found on its territory, states have not only avoided applying existing laws against terrorists; they have also refrained from updating those laws to meet the growing challenges of terrorism. Instead, international institutions and "enlightened" Western states have devoted their time to condemning and threatening to prosecute the few states that have taken action against terrorists.
The inversion of international law from an institution geared toward protecting states and civilians from international lawbreakers to one devoted to protecting international menaces from states and their citizens is nowhere more evident than in the international community's treatment of Hamas-controlled Gaza.
One of the reasons the international community has failed so abjectly to take reasonable measures to combat terrorism is because international terrorism as presently constituted is the creation of Palestinian Arabs and their Arab brethren. Since the 1960s, and particularly since the mid-1970s, Europe, and to varying degrees the US, have been averse to contending with terrorism because their hostility toward Israel leads them to condone Palestinian Arab terrorism against the Jewish state.
THE INTERNATIONAL community's treatment of Hamas-controlled Gaza epitomizes this victory of politics over law. Both the US and the EU have labeled Hamas a terror group. That designation places Gaza, which is controlled by Hamas, under the regime of UN Security Council Resolution 1373.
Among other things, Resolution 1373 requires states to "freeze without delay funds and other financial assets or economic resources of... entities owned or controlled directly or indirectly by [terrorists]."
That is, the resolution requires UN member states to end all financial and other support for Hamas-controlled Gaza.
The resolution also requires UN member states to "cooperate [with other states] to prevent and suppress terrorist attacks and take action against perpetrators of such acts."
This means that states are required to assist one another - and in the case of Hamas, to assist Israel - in combating Hamas and punishing its members and supporters.
While it can be argued that given the absence of a binding legal definition of terrorism, states that do not designate Hamas as a terrorist organization are not required to abide by the terms of 1373 in dealing with Hamas, it is quite clear that for states that do recognize Hamas as a terror group, 1373's provisions must be upheld.
And yet, the EU and the US have willfully ignored its provisions. They have steadily increased their budgetary support for the Palestinian Authority while knowing full well that the Fatah-led PA in Judea and Samaria is transferring money to Hamas-controlled Gaza to pay the salaries of Hamas employees.
More disturbingly, the US and the EU as well as the UN demand that Israel itself sustain Hamas-controlled Gaza economically. The UN, EU and the US have consistently demanded that Israel provide Gaza with fuel, food, water, medicine, electricity, telephone service, port services and access to Israeli markets, in spite of the fact that international law actually prohibits Israel from providing such assistance, and in fact arguably requires Israel to deny it.
Recently, supported by the UN, and in connivance with Hamas, European leaders began supporting illegal moves to end Israel's maritime blockade of Gaza, which was established to block weapons and terror personnel from entering and exiting the area. Expanding this trend, this week Navanethem Pillay, the UN's High Commissioner for Human Rights, called for Israel to end its blockade of the Gaza Strip, perversely calling the blockade a breach of international and humanitarian law.
This inversion of the aims of international law - from protecting states and innocent civilians from attack to protecting aggressors from retaliation - has brought about the absurd situation where terrorist ideologues and commanders such as Sheikh Yusuf Qaradawi are feted in Britain while retired Israeli and American generals are threatened with arrest. Germany welcomed Iranian President and genocide proponent Mahmoud Ahmadinejad to visit and indicted former US defense secretary Donald Rumsfeld for crimes against humanity. Belgium allows Hamas and Hizbullah supporters like Dyab Abu Jahjah, who calls for attacks against Jews, to operate freely, but indicted former prime minister Ariel Sharon for crimes against humanity.
The consequence of this absurd state of affairs is obvious. The international law champions who argue that international humanitarian law provides a nonviolent means for nations to defend themselves against aggressors have perverted the purpose and meaning of international humanitarian law to such a degree that the only way for nations to protect themselves against pirates, terrorists and other international rogues is to ignore international law aficionados and secure their interests by force.
THEODORE DALRYMPLE
Slouching Toward Fanaticism
Passionate intensity, but little rationality, in the anti-immunization movement
14 November 2008
Autism's False Prophets: Bad Science, Risky Medicine, and the Search for a Cure, by Paul A. Offit (Columbia University Press, 328 pp., $24.95)
For some reason, the immunization of children has always aroused opposition of almost religious fervor. For example, a mass movement led resistance to smallpox vaccination in Britain for 70 years and was supported by intellectuals of the stature of George Bernard Shaw, who never believed in the germ theory of disease and thought that Pasteur and Lister were charlatans. Politicians have won or lost elections on their attitude to vaccination. And the extensive literature produced by the antivaccination movement attributed virtually every human ill, from general failure to thrive to the recrudescence of leprosy, to the practice. The movement also imputed the worst possible motives to vaccinators, including Edward Jenner himself, the developer of the smallpox vaccine.
Fears about immunization have reappeared with monotonous regularity. Perhaps it is the medical and social pressure to immunize that stirs up such opposition, especially in countries that pride themselves on their sturdy individualism. And while everyone agrees that prevention is better than cure, a single case of a complication wrought by immunization has more emotional impact than a million cases prevented. The former, after all, is a definite presence, the latter a ghostly absence.
The combined measles, mumps, and rubella vaccine is the latest to act as a lightning conductor for parental discontent. Paul Offit’s new book, as readable as a good detective novel, tells the story of how autism, a disorder of psychological development, came falsely to be blamed first on the MMR vaccine and then on thimerosal, a preservative found in several vaccines. It is a tale about bad science, worse journalism, unscrupulous political populism, and profiteering litigation lawyers.
In 1998, a young British surgeon named Andrew Wakefield published a paper in The Lancet suggesting an association between the measles component of the triple vaccine and the development of childhood autism. Though the paper stressed that no causative relationship had been proved, Wakefield took the most unusual (and self-promoting) step of calling a press conference, in which he suggested that the vaccine should be withdrawn. Panic ensued, immunization rates declined, and measles made a comeback in Britain. The panic spread across the Atlantic.
Wakefield’s paper, though, was a very bad one, and the editor of The Lancet—one of the most prestigious medical journals in the world—should never have countenanced publication of such rotten science. The ensuing uproar made necessary expensive and time-consuming epidemiological research that repeatedly failed to find any connection between the vaccine and autism. It seems likely, moreover, that Wakefield knowingly falsified some of his results. Those that he did not falsify were based on grossly deficient laboratory technique. An investigative journalist discovered a few years later that Wakefield had received payments from a serial litigation lawyer who hoped to mount a class-action suit against the vaccine’s manufacturers. Despite all this, Wakefield still has faithful followers, as do all false messiahs who survive their own predictions of the end of the world.
Closely allied with the MMR theory is the contention that thimerosal in vaccines causes autism. There is not the slightest evidence in favor of this conclusion, but it, too, has devoted believers. Parents who first notice their children’s autistic traits soon after immunization with MMR are understandably difficult to persuade that their experience is of almost no value in deciding the question of causation. Where two events, such as MMR vaccination and the development of autistic traits, are common, it is inevitable that people will mistakenly associate them with each other. But it is shameful that politicians and journalists should fail to understand this fairly simple point. I cannot make up my mind whether it would be worse if the politicians were merely cynical or actually ineducable.
False theories of causation are apt to call forth absurd or even dangerous methods of cure, and so it was in this instance. Children have been assaulted—often at great expense—with a host of special diets and medicaments in the hope of cure. One can only sympathize with the desperate parents, eager to clutch at any straw to find a satisfying explanation for the misfortune that has befallen them. But for the false messiahs, at best self-deceived and at worst outright fraudulent (and sometimes, I suspect, a little of both), one can feel nothing but outrage.
Offit’s book raises questions much broader than his ostensibly limited subject matter would suggest. What is the place of scientific and scholarly authority in the modern world, and how is it to be institutionalized in a democracy? Is it inevitable that the best should now lack all conviction while the worst are full of passionate intensity? What is the relation between information, on the one hand, and knowledge and wisdom, on the other? Cranks are often oversupplied with the former and deficient in the latter, not realizing that there is a difference between the two. Autism’s False Prophets gives no easy answers, but it does provide a rich source of material for political philosophers and even epistemologists, who ought to assign it to their students.
A final note on the question of passionate intensity: Offit, a prominent public defender of child immunization (who recognizes that, as with any medical treatment, it can sometimes have harmful effects), has been persecuted and threatened by activists who disagree with him. He begins his book with the startling statement, “I get a lot of hate mail.” He has been branded on some websites as a terrorist, and he has sometimes needed police protection. I found out firsthand how deep the passions against him run when I published a positive review of one of his previous books and received abusive e-mails in reply. Readers accused me not merely of error, but of complicity in corruption and depravity. This is surely extraordinary.
Theodore Dalrymple, a physician, is a contributing editor of City Journal and the Dietrich Weismann Fellow at the Manhattan Institute.
Progressives and Obama's Acceptable Blackness American Thinker ^ | November 22, 2008 | Miguel A. Guanipa Posted on November 22, 2008
On the eve of Clarence Thomas' Supreme Court nomination, a cry went out throughout the land. It was that of fire-breathing liberals who could not believe that a conservative president would dare appoint a conservative judge to the highest court in the nation. Although the president had picked someone of African-American descent who was also more than qualified to fulfill the role of Supreme Court Justice, such considerations were swiftly trumped by the fact that Mr. Thomas was not pledged to walk in lockstep with the abiding progressive weltanschauung; in other words, he was not viewed by liberals as the "right" kind of black man.
Likewise, many on the left frowned disapprovingly at President George W. Bush's choice of General Colin Powell for the office of Secretary of State. Some even referred to Mr. Powell as a modern-day Uncle Tom, congenially submissive and overly accommodating to the erratic whims of his war-hungry superiors. The same Colin Powell was not long ago lauded by the mainstream media for his ringing endorsement of now-President-elect Barack Obama.
Another promising black American, Condoleezza Rice, who later assumed the role vacated by General Powell, also had to endure condemnation from the acerbic tongues of the liberal elite, who simply could not countenance another breach of their presumed monopoly on diversity by an impudent Republican president. Moreover, like her predecessor, the newly appointed Secretary of State appeared equally comfortable with the preemptive-offense rationale upheld by the same hubristic regime.
And so progressives -- who periodically like to commend themselves for the sincerity of their empathy with the plight of all minorities -- have been mostly engaged in unsuccessful attempts to frustrate and overturn what otherwise could have been hailed as truly historic appointments. Ironically, their most notable accomplishment is that in the process they have robbed the black community of rare opportunities to celebrate some rather significant milestones in this country's heartbreaking journey of race relations.
Now that a black man with decidedly leftist fringe credentials has been chosen as future president of the United States, liberals fancy themselves playing a part in the shaping of history. In concert with their anointed figurehead, they successfully orchestrated what they deemed to be the proper conditions under which African-Americans are granted the opportunity -- permission? -- to rejoice in the advancement of one of their own. More importantly, the diligent -- and at times inglorious -- efforts of that self-congratulatory aristocracy of closet anarchists known as progressives have also yielded what the media gleefully proclaims is widespread assent to their utopian social compact.
But as with every grandiose vision, there is never a want of colossal ironies.
Consider the Freedom of Choice Act, which Barack Obama has promised to sign into law as soon as he takes office. Its innocuous name belies the fact that F.O.C.A. will aggressively seek to standardize unrestricted national access to abortion on demand. Planned and unplanned babies safely residing in the womb, and at any stage of development, will be legally stripped of their status as persons. As such, they will not enjoy the constitutional benefit of protection from the state.
Our founders naively allowed a derivative ontological exception in order to justify the enslavement of another group of voiceless citizens. And this irony is compounded by the cruelly indifferent statistics which report that the vast majority of unborn children aborted daily in this country in the name of choice are disproportionately representative of the same demographic group from which Obama enjoyed the most enthusiastic support during this past election season.
Secondly, Obama will be taken to task by his equally extremist peers in Congress for swift passage of the so-called Fairness Doctrine. As with most pieces of legislation Democrats seem to have a penchant for crafting, this one is ostensibly worded to deliver exactly the opposite of what its title purports to champion.
In short, broadcasters on the public airwaves will be required, at their own expense, to grant equal time to those in the opposition for retaliatory expositions of their views, despite the fact that, historically, such views have failed to sustain the interest of an audience that attracts sponsors. Since it is well known that conservatives dominate the medium, what is billed as an equal opportunity for all voices to be heard is simply a targeted attempt by progressives to gradually silence the kind of free speech they find personally objectionable.
The irony here is that this foreboding development constitutes a betrayal of one of the most foundational freedoms guaranteed by the Constitution -- a freedom that, again, the once-disenfranchised ancestral kin of Obama's most loyal constituency plainly understood, having endured personal battles for the right to speak in a free society without fear of retribution.
And finally, the crown jewel of Obama's looming progressive initiatives: compulsory redistribution of wealth from the few "haves" to the many "have-nots", succinctly outlined for one "Joe the Plumber" in a rare pre-coronation unscripted moment.
Intended to assuage economic disparities, such schemes tend only to inflame social tensions, especially amongst those who rightly perceive their roles as equal participants in what is admittedly a less than perfect system. With any luck, the measure will not be a catalyst for something Obama -- I think -- wants to avoid: the furious resurgence of racially motivated class warfare.
But Obama's ideological entrenchment in what is nothing more than a socialist template may have blinded him to the fact that he has been presented with a unique leadership opportunity -- as the first African-American elected president -- to promote the time-tested principle that equal participation generally means an equal stake in prosperity and advancement. Ironically (again), this is a principle for which Obama's own personal journey, and that of those who came before him -- under the auspices of a Republican administration, no less -- presents a rather compelling case.
It was, after all, Obama himself who once declared that he did not wish for people to elect him simply because of the color of his skin. He may rest assured that that is the least of the reasons why progressives -- who will soon be requiring that their agendas be expeditiously implemented -- have seen to it that he become their leader.
10,000 Britons die needlessly every year as GPs with out-of-date training miss vital cancer symptoms
By Daniel Martin Last updated at 1:32 AM on 21st November 2008
More than 10,000 people die needlessly each year because their cancers are not diagnosed in time, a study says.
The charity Cancer Research UK found GPs too often miss symptoms or do not send enough patients for tests.
In some cases their training is simply out of date. The report says some people are deterred from seeking treatment by the difficulty of getting an appointment.
And there is too little public awareness about cancer symptoms, meaning many victims do not see their GP until it is too late to save their lives. The result is that Britain's survival rates for cancer are still the worst in Western Europe, despite the billions poured into the Health Service by Labour.
Only 53 per cent of women and 42 per cent of men with cancer survive for more than five years.
Of 14 major countries compared by the charity, Britain came 11th for women and 12th for men, alongside Poland and Slovenia. If our rates were as good as the best in Europe, the report says, there would be 10,744 fewer deaths a year.
Lead researcher Professor Michael Coleman said: 'We know many cases are being diagnosed too late and this is a major reason for our poor survival rates.'
He said many GPs were not up to date on cancer treatment, and family doctors with an average practice size saw only around eight new cancer cases a year.
'Some GPs would benefit from guidance on identifying patients more successfully,' he said.
Another problem was access, said Professor Coleman. 'Patients find it difficult to make appointments or park their cars, and many are worried about taking time off work and losing money.'
Only half of GP practices see patients outside working hours - and even these open for an average of only three more hours a week.
The failure of GPs comes despite their pay soaring by more than 50 per cent - to over £100,000 - since a new contract was agreed in 2004.
They are also working fewer hours a week.
Better survival rates in Europe are partly due to the fact that patients in many countries can have direct access to a specialist, while in Britain they must go through their GP.
The Government's cancer 'czar', Professor Mike Richards, said: 'We want to work with GPs to find out which patients and which symptoms they are most likely to miss. They need to be more alert and send people for tests much earlier.'
Britain's poor record has also been blamed on drug rationing by NICE - which can take up to 18 months to decide whether the NHS should fund new treatments - and on low spending on cancer drugs: £76 a head a year, compared with £143 in Germany and £121 in France.
Professor Karol Sikora, professor of cancer medicine at London's Imperial College, said last night the low survival rates were a failure of the whole NHS, not just GPs. He said: 'People have to wait too long for scans and biopsies. There is undercapacity in radiography and chemotherapy.
'We don't get access to the drugs they get in Europe. Huge amounts of money have been thrown at cancer over the past decade so it is surprising to see these problems are still here.
'The main culprit is the NHS itself - it's a bureaucratic monolith.'
The more striking anomaly in the OPR that cries out for explanation is that the gap between reported income and personal spending has widened sharply. Between 1960–61 and 2005, the ratio of expenditures to income for all households remained fairly stable, with expenditures exceeding income by a significant amount. The subgroup of poorer households also seemingly overspent. But in contrast to households in general, the margin by which the spending of the poor exceeded reported income has moved steadily upward. In the early 1960s, the ratio was 1.12 for the lowest income quartile. By 1972–73, that ratio had reached 1.4 for the lowest income quintile. (Note again the lack of data on precisely comparable groups.) By 2005, the ratio for the bottom fifth had reached 1.98.
What accounts for the gap? One possible hypothesis is that low-income Americans are overspending at an increasing rate--that is, going ever deeper into debt. By this reasoning, the widening gap represents an unsustainable binge that must eventually come to an end, with doleful consequences for future living standards of the disadvantaged.
The overspending hypothesis, on its face, seems plausible. Luckily, it is confuted by evidence from U.S. Census Bureau and Federal Reserve surveys showing that the average net worth of households in the bottom fifth has actually grown in the last decade. Additionally, the gains in wealth have been broadly shared, with the portion of bottom fifth households reporting no assets whatever falling from 21 percent in 1989 to just 8 percent in 2004.
If the poor are not overspending, is it possible they are underreporting income? There is little doubt. For one thing, the OPR measure of income ignores tens of billions in tax rebates delivered by the Earned Income Tax Credit. By the same token, the poor surely supplement their incomes off the books. But to use the underreporting phenomenon to explain why the gap between spending and income has widened so much, one would also need to explain why underreporting was increasing rapidly. Hence, the explanation for the widening gap more likely lies elsewhere.
Might the growing disparity be explained by another big change in modern America--namely, the rise of illegal immigration? The argument would go like this: there has been a surge of undocumented immigration over the past generation, and illegal immigrants are understandably inclined to underreport their incomes. Yet they have no similar incentives to underreport their consumption in interview-based surveys. Thus, all other things equal, as the undocumented become an ever greater proportion of the lower-income population, the gap between spending and reported income should grow. (Immigrants, furthermore, tend to be savers--think remittance flows--a fact that could also help account for the reported increases in wealth among the poor in recent decades.)
The argument is plausible, but the actual magnitude of the effect is likely to be small. Undocumented immigrants are believed to comprise less than 3 percent of all U.S. residents. Moreover, they are probably undersampled by the survey techniques used by the Census Bureau to estimate the poverty rate.
But even if all illegal immigrants were fully represented in our income and expenditure surveys, if there had been no illegal immigrants in the country in the early 1970s, if all illegal immigrants now fall in the lowest quintile of the U.S. income distribution, and if this entire group reported no income at all, the illegal immigration effect could account for less than half of the rise in the ratio of spending to income that was actually reported by the bottom fifth of American households between 1972 and 2005. In reality, the impact is probably much smaller. If we want to understand the uncanny continuing divergence between reported spending levels and reported income levels for the lowest fifth of American households, then we are going to have to look into the economic circumstances of legal American residents--the overwhelming majority of whom are native-born.
To see where we are heading, note that poverty status is not a fixed, long-term condition for the overwhelming majority of Americans who are ever designated as poor. Quite the contrary: long-term poverty appears to be the lot of only a tiny minority counted as poor in any particular year by the OPR. For example, the Census Bureau found that from 1996 to 1999, fully 34 percent of all households spent two months or more below the poverty line--but only 2 percent stayed below the line in all forty-eight of those months. Both economic theory and common sense suggest that the temporarily poor would try to maintain their living standards in lean times by spending more than they earn.
But this alone would not explain why the spending-income gap increased so much in recent decades. What is needed is a reason to believe that household incomes are more variable than they used to be, sharply increasing the portion of the materially disadvantaged that are only temporarily poor. Here, the accumulating evidence is intriguing. For example, Jacob Hacker of Yale University found that, for households headed by people of working age, the odds of seeing income fall by half or more in the coming year rose from 7 percent in 1970 to 16 percent in 2002.
This unintuitive explanation for the growing income-consumption gap meshes neatly with another surprising bit of survey data. According to the Federal Reserve, differences in net worth for the bottom quintile and the next highest quintile of American families narrowed between 1989 and 2004--hardly what one would expect in light of evidence of growing income inequality since the 1960s. If year-to-year income volatility were on the rise, however, we would expect an increasing share of families permanently lodged in the second quintile to register temporarily in the bottom quintile in any given year--and conversely, we would also expect a rising share of lowest-quintile families to bounce up to the second quintile in any given year. Rising income volatility, in short, could be a key to explaining the seemingly paradoxical behavior of lower-income households with respect to both spending and the accumulation of wealth.
This is, arguably, both good news and bad. On the one hand, it suggests that lower-income Americans are not spending themselves into oblivion. On the other, it implies that income volatility is a large and growing concern for ever more Americans at the short end of the income stick.
What It All Means
We would surely discard a statistical measure that showed life expectancy was falling during a time of ever-increasing longevity, or one that suggested our national finances were balanced in a period of rising budget deficits. Central as the OPR has become to antipoverty policies--or, more precisely, especially because of its central role in such policies--it should likewise be discarded in favor of a more accurate way (or ways) of describing trends in material deprivation.
Do not misinterpret this dismissal of the OPR. Nothing about the analysis here leads me to conclude that poverty is a thing of the past--or even that the general plight of the poor in America is markedly better today than in 1965 when Lyndon Johnson's "War on Poverty" was ramping up. To the contrary, I think that in many tragic respects, the misery and degradation suffered by America's most disadvantaged may well be more acute today than it was forty years ago.
For example, no matter how you measure it, family structure is far more frayed than it was in 1965. While the consequences of family breakdown are seldom auspicious, they tend to be most severe for the poor. By the same token, despite the past decade and a half of decline in major urban areas, crime rates in America remain far higher today than in 1965. And it is no secret that the greatest burden of crime falls directly on the very poorest. The corollary of that crime explosion--today's historically unprecedented levels of prison incarceration for the country's young men--not only reflects on misery in modern America, but contributes to it.
Nor does this study suggest that America's long war on poverty has been a failure. It does not even attempt an overall assessment of the material impact of those policies. At the risk of beating a dead horse, the only issue here is the reliability of the OPR as a measure of poverty. And, whereas the rate shows no progress in reducing poverty over the past three and a half decades, practically every other available statistical indicator points to major improvements in material living standards.
Accommodating such findings will require a fairly major recasting of the conventional narrative about long-term progress against poverty. Yet rethinking what has happened is hardly likely to end disputes over the value of the welfare reform plan of the 1990s or of the efficiency of the government's antipoverty programs. Our contentious, ongoing national debate about the adequacy and efficacy of the social safety net is at its core a dispute over first principles and underlying premises--not the simple facts of how many people go hungry or lack access to a telephone.
This study points to some promising avenues for inquiry in the years ahead. Consumption is, without doubt, a more faithful measure of material deprivation than income. Further, the complex (and, for our purposes, crucial) interplay between consumption and income can be much better captured by surveys that track specific households or individuals over time than by "snapshot" measures at a single date. Yet the government's capacity to follow the long-term dynamics of household income and consumption in America is woefully limited--a curious oversight for an information-rich society.
Equally curious is the fact that the first efforts to create an alternative poverty metric are not coming from federal antipoverty agencies, but from New York City, whose mayor, Michael R. Bloomberg, decided that the OPR was all but useless in determining how to spend the city's limited antipoverty resources. So New York City created a new poverty measure. It does not tackle the inherently difficult problem of accounting for year-to-year variability in individual household resources, but it is plainly a step up because it focuses on consumption rather than income.
The first results, released in mid-July, are provocative. The new measure implies that 23 percent of New York's population is poor, as opposed to the OPR's 19 percent. But a much smaller proportion of the recorded poor is shown to be in extreme poverty because the new measure takes into account food stamps and housing subsidies. Conversely, the share of the elderly counted as poor is much higher (32 percent) than previously recognized, apparently because of increases in the cost of medical services not covered by government insurance. For now, the New York City metric offers only a snapshot of poverty--a picture of a single point in time--but it should be possible to use this framework to estimate long-term poverty trends.
On the same day New York released its first estimates, the House Ways and Means Subcommittee on Income Security and Family Support held hearings to explore the idea of modernizing the federal measurement of poverty. The subcommittee's chairman, Jim McDermott (D-Wash.), has submitted a bill calling for a new consumption-based metric for poverty derived from decade-old recommendations from the National Academy of Sciences. As with the New York City approach, it does not account for the role of year-to-year income variability in poverty. But it is a start: the process of rethinking the federal government's obsolete official poverty measure has begun.
Nicholas Eberstadt is the Henry Wendt Scholar in Political Economy at AEI.
The original piece, linked below, contained numerous tables.
The Poverty of the Official Poverty Rate
By Nicholas Eberstadt Posted: Wednesday, November 12, 2008
The proportion of people living in poverty has increased slightly since the 1970s, according to the official poverty measure. Since the "poverty threshold" used for counting the poor is fixed and unchanging, those numbers suggest a disturbing rise in absolute want. But data on household spending show consumption growth even for those with low incomes. The official poverty rate is seriously flawed, and many analysts are ready to scrap it.
Washington regularly collects vast amounts of data for hundreds upon hundreds of social and economic indicators bearing on poverty. But within that compendium, a single number is widely taken to be more important than the others--the so-called official poverty rate (OPR), which is based on the federal poverty measure established in the 1960s. For four decades, that rate has served as the benchmark for both policy analysis and public discourse regarding the national struggle to reduce the deprivation in our midst. Yet even a casual examination shows that this metric is deeply flawed and increasingly biased toward the overestimation of material poverty.
While the OPR numbers say that the proportion of the American population living in poverty has changed little--indeed, has slightly increased--since the early 1970s, data on household spending show substantial and continuing growth in consumption among those reporting very low incomes. Indeed, it is becoming increasingly clear that the OPR is of no help in figuring out where we are today or even where we have come from. Signs are finally on the horizon that analysts on both the left and the right are prepared to scrap the official rate in favor of more realistic ways to track poverty.
A Little History
The poverty rate measure was introduced in 1965 in a landmark study by Mollie Orshansky, an economist and statistician at the Social Security Administration. Drawing on her own research, in which she had experimented with using household income thresholds to identify children living in impoverished conditions, Orshansky proposed a set of income criteria for setting a poverty threshold and determining who lived below it.
Orshansky's threshold was essentially a multiple of the cost of a nutritionally adequate--though humble--diet. For the food-budget anchor, Orshansky used the U.S. Department of Agriculture's "economy food plan," the lower of its two budgets for nonfarm families of modest means. She then applied a multiplier of roughly three--the number varied with family size--to calculate household poverty thresholds. The multiplier itself, incidentally, came from statistics on the ratio of after-tax income to food budgets for all Americans in the 1950s.
Using these new poverty thresholds, along with census data on income, Orshansky calculated the total population living below the poverty line for the United States as a whole, as well as for demographic subgroups, for 1963. Although Orshansky's study did not employ the term "poverty rate," talking instead about the "incidence of poverty," the term quickly came to mean the proportion of people or families below the poverty line.
Where's the Beef?
Little has changed since 1965 in the way the federal government measures poverty. The OPR is still calculated annually on the basis of poverty thresholds adjusted for inflation. The rate is and always has been a measure of absolute material poverty--one that intentionally ignores changes in the culture and the economy that influence popular perceptions of what constitutes deprivation.
Estimates of the OPR for the United States are thus available for the past forty-eight years. At first, they gratifyingly tracked the expectations of those who assumed that the rising tide of economic growth would carry all boats. The rate fell by roughly half between the late 1950s and the late 1960s for both families and individuals.
Strikingly, however, the numbers suggest virtually no improvement since then. The lowest OPR yet recorded was for 1973, when the index bottomed out at 11.1 percent. The OPR has since declined for older Americans, for people living alone, and for African Americans. But for most demographic slices--children under eighteen, families, and non-Hispanic whites--the OPR was higher at the start of the new century than it had been in the early 1970s. Low-income Hispanics were somewhat better off in 2006 than in 1973, but the difference is distressingly modest.
To go by the OPR, then, America, through three decades of both Democratic and Republican administrations, has utterly failed to improve the material lot of the more vulnerable elements of society--to raise them above the income line where, according to the author of the federal poverty measure, "everyday living implied choosing between an adequate diet of the most economical sort and some other necessity, because there was not money enough to have both."
Remember the all-boats-rising thesis? Experts on poverty long held that the OPR is largely driven by macroeconomic conditions--employment opportunities and wage rates. In a series of influential publications in the mid-1980s, for example, David Ellwood and Lawrence Summers of Harvard found that "almost all of the variation in the measured poverty rate is tracked by movements in median family income."
But their work only covered 1959–83. Since the early 1970s--the long decades of stagnation in the OPR--the correspondence between the statistic and median family income appears to have broken down altogether. In fact, over the past three-plus decades, data on the median income for American families have provided no clue to the OPR.
Additionally, since 1973, the behavior of the OPR looks increasingly aberrant when compared with other indices widely thought to bear on the risk of poverty in a modern urbanized society. In 1973, nearly 40 percent of adults over the age of twenty-five lacked a high school diploma; by 2001, the figure was under 16 percent. Or consider trends in means-tested benefit programs--food stamps, housing subsidies, Medicaid, the Earned Income Tax Credit, and other programs that benefit the poor. Between the 1973 and 2001 fiscal years, spending on those programs more than tripled, from $163 billion to $507 billion (in 2004 dollars), and increased by over 130 percent in real, per-capita terms.
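The "more than tripled" and "over 130 percent per capita" figures are mutually consistent once population growth is factored in. A quick check, using approximate U.S. population figures for 1973 and 2001 (my assumption, not numbers from the article):

```python
# Check that a ~3.1x rise in real spending implies a ~130 percent
# real per-capita increase, given U.S. population growth.
# Population figures are rough assumptions, not from the article.

spend_1973 = 163e9   # means-tested spending, 2004 dollars
spend_2001 = 507e9   # means-tested spending, 2004 dollars
pop_1973 = 212e6     # approx. U.S. resident population, 1973
pop_2001 = 285e6     # approx. U.S. resident population, 2001

total_growth = spend_2001 / spend_1973  # "more than tripled"
per_capita_growth = (spend_2001 / pop_2001) / (spend_1973 / pop_1973)

print(f"total: {total_growth:.2f}x")
print(f"per capita: +{(per_capita_growth - 1) * 100:.0f} percent")
```

With those population assumptions, total spending grows about 3.1-fold and per-capita spending by roughly 130 percent, matching the article's characterization.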
The simplest and most plausible explanation for these seeming contradictions is that something is seriously wrong with the way the OPR is calculated. A variety of minor and major technical problems have been noted by specialists over the years--among them, the method for adjusting for inflation and the use of the food budget as the sole benchmark for income sufficiency. One other defect, however, fundamentally undermines the current approach.
The Income-Consumption Mystery
The rate calculation implicitly assumes that consumption by low-income Americans is accurately tracked by their reported incomes. In fact, there is good evidence that, for the lowest fifth of Americans on the income ladder, reported expenditures are almost twice their incomes.
Correcting for changes in household size, real expenditures per person for all Americans were 110 percent higher in 2005 than in 1960–61 for the country as a whole. We do not have precisely comparable figures for poor households, but we do know that the real expenditures of the poorest fifth of households were 112 percent higher in 2005 than the expenditures of the poorest fourth in 1960–61.
Put simply, consumption in low-income households has grown even faster than that of average American households. Moreover, other statistical evidence confirms that lower-income Americans are doing far better than the stagnation in the OPR suggests.
Food and Nutrition. In the early 1960s, inadequate caloric intake was hardly unusual among the officially defined poor. By the end of the century, however, the proportion of the adult population between twenty and seventy-four who were underweight (defined as a body mass index below 18.5) dropped from 4 percent to 1.9 percent.
By the same token, nutritional deprivation among children has been declining. According to the Centers for Disease Control and Prevention, the percentage of low-income children younger than five who were underweight dropped from 8 percent in 1973 to under 5 percent in 2005. (In the same period, the OPR for children rose from 14.4 percent to 17.6 percent.)
Housing and Home Appliances. In 1970, about 14 percent of poverty-level households were officially deemed "overcrowded," with more people than rooms to live in. By 2001, just 6 percent of poor households were overcrowded--a proportion lower than for nonpoor households as recently as 1970. Moreover, between 1980 and 2001, heated floor space per person in the homes of the officially poor increased by 27 percent. And in 2001, just 2.5 percent of poverty-level households lacked plumbing facilities--a lower share than for nonpoor households in 1970.
Trends in furnishings and appurtenances tell the same story: poor households' possession of modern conveniences has been growing rapidly. For many of these items--telephones, television sets, central air conditioning, and microwave ovens--prevalence in poverty-level households in 2001 exceeded that of median-income households in 1980.
Personal Transportation. In 1973, almost three-fifths of the households in the lowest income quintile lacked a car. In 2003, by contrast, over three-fifths of poverty-level households owned one or more cars. In that same year, moreover, 14 percent of households below the poverty line owned two or more cars, and 7 percent had two or more trucks.
Health Care. Between 1970 and 2004, the infant mortality rate fell by a remarkable two-thirds. And it continued its almost uninterrupted decline after 1973, even as the OPR for children began to rise. The disconnect is particularly striking for white infants. Between 1974 and 2004, their mortality rate fell by three-fifths, from 14.8 deaths per thousand to 5.7 deaths per thousand. Yet the OPR for white children rose from 11.2 percent to 14.3 percent.
The gains in access to medical care for infants extend to older children. The proportion of children who did not report a visit to a physician was significantly lower for the poor population in 2004 (12 percent) than it had been for the nonpoor population twenty-two years earlier (17.6 percent).
[David Kopel, November 20, 2008 at 7:41pm]
Eric Holder on firearms policy:
Earlier this year, Eric Holder--along with Janet Reno and several other former officials from the Clinton Department of Justice--co-signed an amicus brief in District of Columbia v. Heller. The brief was filed in support of DC's ban on all handguns and its ban on the use of any firearm for self-defense in the home. The brief argued that the Second Amendment is a "collective" right, not an individual one, and asserted that belief in the collective right had been the consistent policy of the U.S. Department of Justice since the FDR administration. A brief filed by some other former DOJ officials (including several Attorneys General, and Stuart Gerson, who was Acting Attorney General until Janet Reno was confirmed) took issue with the Reno-Holder brief's characterization of DOJ's viewpoint.
But at the least, the Reno-Holder brief accurately expressed the position of the Department of Justice when Janet Reno was Attorney General and Eric Holder was Deputy Attorney General. At the oral argument before the Fifth Circuit in United States v. Emerson, the Assistant U.S. Attorney told the panel that the Second Amendment was no barrier to gun confiscation, not even the confiscation of guns from on-duty National Guardsmen.
As Deputy Attorney General, Holder was a strong supporter of restrictive gun control. He advocated federal licensing of handgun owners, a three-day waiting period on handgun sales, rationing handgun sales to no more than one per month, banning possession of handguns and so-called "assault weapons" (cosmetically incorrect guns) by anyone under the age of 21, a gun-show restriction bill that would have given the federal government the power to shut down all gun shows, national gun registration, and mandatory prison sentences for trivial offenses (e.g., giving your son an heirloom handgun for Christmas, if he were two weeks shy of his 21st birthday). He also promoted the factoid that "Every day that goes by, about 12, 13 more children in this country die from gun violence"--a statistic that is true only if one counts 18-year-old gangsters who shoot each other as "children." (Sources: Holder testimony before House Judiciary Committee, Subcommittee on Crime, May 27, 1999; Holder Weekly Briefing, May 20, 2000. One of the bills that Holder endorsed is detailed in my 1999 Issue Paper "Unfair and Unconstitutional.")
After 9/11, he penned a Washington Post op-ed, "Keeping Guns Away From Terrorists," arguing that a new law should give "the Bureau of Alcohol, Tobacco and Firearms a record of every firearm sale." He also stated that prospective gun buyers should be checked against the secret "watch lists" compiled by various government entities. (In an Issue Paper on the watch list proposal, I quote an FBI spokesman stating that there is no cause to deny gun ownership to someone simply because she is on the FBI list.)
After the D.C. Circuit Court of Appeals ruled that the D.C. handgun ban and self-defense ban were unconstitutional in 2007, Holder complained that the decision "opens the door to more people having more access to guns and putting guns on the streets."
Holder played a key role in the gunpoint, night-time kidnapping of Elian Gonzalez. The pretext for the paramilitary invasion of the six-year-old's home was that someone in his family might have been licensed to carry a handgun under Florida law. Although a Pulitzer Prize-winning photo showed a federal agent dressed like a soldier and pointing a machine gun at the man who was holding the terrified child, Holder claimed that Gonzalez "was not taken at the point of a gun" and that the federal agents whom Holder had sent to capture Gonzalez had acted "very sensitively." If Mr. Holder believes that breaking down a door with a battering ram, pointing guns at children (not just Elian), and yelling "Get down, get down, we'll shoot" is an example of acting "very sensitively," his judgment about the responsible use of firearms is not as acute as would be desirable for a cabinet officer who would be in charge of thousands and thousands of armed federal agents, many of them paramilitary agents with machine guns.
I dunno, as someone who has seen a lot of Muslim women show up to school in burqas, change over to jeans and a blouse before class, and then change back before they are picked up at the end of the day--women aren't 'sposed to drive, don'tcha know--I have a hard time wrapping my head around the concept of "voluntary paternalism." When we see women in many Muslim societies making "choices" women in less restrictive ones go a long way to avoid, are we not adopting dissimilitude should we label those choices "voluntary?"
In a taped speech shown to attendees at a climate change conference in California this week, Barack Obama continued trying to distract Americans from the enormous cost of making substantial reductions in carbon dioxide emissions by promising "five million new green jobs that pay well and can't be outsourced." Not only is this number pulled out of thin air; it's nothing to be happy about. As I've noted, the manpower required to transform the economy so that greenhouse gas emission targets can be reached is a measure of the cost involved. Obama makes it seem as if we should try to maximize this cost, promising that green jobs will "steer our country out of this economic crisis."
That is pretty much the opposite of the truth. As The New York Times notes, "some industry leaders and members of Congress have suggested that Mr. Obama's climate proposal would impose too great a cost on an already-stressed economy—having the same effects as a tax on coal, oil and natural gas—and should await the end of the current downturn." Obama's response is to portray the economic burden as a boon.
In the speech, he does implicitly make the case that the cost he refuses to acknowledge will be justified in the long run:
Few challenges facing America—and the world—are more urgent than combating climate change. The science is beyond dispute and the facts are clear. Sea levels are rising. Coastlines are shrinking. We've seen record drought, spreading famine, and storms that are growing stronger with each passing hurricane season.
Is it really "beyond dispute" that global warming already has produced drought, famine, and stronger storms? New York Times environmental reporter Andrew Revkin notes that "the statement about 'storms that are growing stronger with each passing hurricane season' is hard to square with the science on hurricanes in a warming world, which has gotten more nuanced of late."
Even if Obama were right about current conditions, and right that things will only get worse, what evidence is there that his cap-and-trade plan will ameliorate the trend enough to justify the cost? Assuming we meet his goal of an 80 percent reduction in carbon dioxide emissions by 2050 (a conveniently distant deadline), how much will it cost, what impact will it have on global warming, and how much damage will thereby be avoided?
Bjorn Lomborg, author of The Skeptical Environmentalist and Cool It: The Skeptical Environmentalist's Guide to Global Warming, argues that adapting to climate change is much more cost-effective than trying to prevent it, an effort he says is unlikely to have any measurable impact. Presumably Obama thinks Lomborg is wrong. I'd like to hear why. But that would require Obama to be more candid about the sacrifices demanded by his plan to create the Clean-Energy Economy of Tomorrow. It is difficult to perform a cost-benefit analysis when you refuse to admit there's a cost.
Ron Bailey's interview with Lomborg appeared in the October issue of reason.
Kinda amusing that, after the relentless drumbeat of "conserve," this item is being presented as a potential problem.
Surprise Drop in Power Use Delivers Jolt to Utilities By REBECCA SMITH
An unexpected drop in U.S. electricity consumption has utility companies worried that the trend isn't a byproduct of the economic downturn, and could reflect a permanent shift in consumption that will require sweeping change in their industry.
Numbers are trickling in from several large utilities that show shrinking power use by households and businesses in pockets across the country. Utilities have long counted on sales growth of 1% to 2% annually in the U.S., and they created complex operating and expansion plans to meet the needs of a growing population.
"We're in a period where growth is going to be challenged," says Jim Rogers, chief executive of Duke Energy Corp. in Charlotte, N.C.
The data are early and incomplete, but if the trend persists, it could ripple through companies' earnings and compel major changes in the way utilities run their businesses. Utilities are expected to invest $1.5 trillion to $2 trillion by 2030 to modernize their electric systems and meet future needs, according to an industry-funded study by the Brattle Group. However, if electricity demand is flat or even declining, utilities must either make significant adjustments to their investment plans or run the risk of building too much capacity. That could end up burdening customers and shareholders with needless expenses.
To be sure, electricity use fluctuates with the economy and population trends. But what has executives stumped is that recent shifts appear larger than others seen previously, and they can't easily be explained by weather fluctuations. They have also penetrated the most stable group of consumers -- households.
Dick Kelly, chief executive of Xcel Energy Inc., Minneapolis, says his company, which has utilities in Colorado and Minnesota, saw home-energy use drop 3% in the period from August through September, "the first time in 40 years I've seen a decline in sales" to homes. He doesn't think foreclosures are responsible for the trend.
Duke Energy Corp.'s third-quarter electricity sales were down 5.9% in the Midwest from the year earlier, including a 9% drop among residential customers. At its utilities operating in the Carolinas, sales were down 4.3% for the three-month period ending Sept. 30 from a year earlier.
American Electric Power Co., which owns utilities operating in 11 states, saw total electricity consumption drop 3.3% in the same period from the prior year. Among residential customers, the drop was 7.2%. However, milder weather played a role.
Utility executives question whether the recent declines are primarily a function of the broader economic downturn. If that's the case, says Xcel's Mr. Kelly, then utilities should continue to build power plants, "because when we come out of the recession, demand could pick up sharply" as consumers begin to splurge again on items like big-screen televisions and other gadgets.
Some feel that the drop heralds a broader change for the industry. Mr. Rogers of Duke Energy says that even in places "where prices were flat to declining," his company still saw lower consumption. "Something fundamental is going on," he says.
Michael Morris, the chief executive of AEP, one of the country's largest utilities, says he thinks the industry should be wary about breaking ground on expensive new projects. "The message is: be cautious about what you build because you may not have the demand" to justify the expense, he says.
Utilities are taking steps to get a better understanding of the cause. Some are asking customers who reduced usage to explain what is influencing them. Xcel and other utilities, for example, have recently been running environmentally focused campaigns urging consumers to use less energy, a message that might be taking hold.
Power companies are also questioning the reliability of the weather-adjustment models they use to harmonize fluctuating sales from quarter to quarter. "It's more art than science," says Bill Johnson, Chief Executive of Progress Energy Inc., Raleigh, N.C.
If the sector is entering a period of lower demand -- which could accelerate further if the automotive sector collapses -- many utilities will have to change the way they cover their costs.
Utilities are taking a hard look at the way they set rates and generate profits. Many companies are embracing a new rate design based on "decoupling," in which they set prices aimed at covering the basic costs of delivery, with sales above that level being gravy. Regulators have resisted the change in some places, because it typically means that consumers using little energy pay somewhat higher rates.
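A minimal sketch of the decoupling idea, with invented numbers: the utility's fixed delivery costs are recovered across however many kilowatt-hours are actually sold, so the per-kWh rate floats with volume, and system-wide conservation pushes the unit rate up for everyone, including light users.

```python
# Toy illustration of a "decoupled" rate design. All figures are
# invented for illustration; real rate cases are far more involved.

def decoupled_rate(fixed_costs, total_kwh_sold):
    """Per-kWh delivery rate set so fixed costs are recovered
    regardless of how much electricity is ultimately sold."""
    return fixed_costs / total_kwh_sold

fixed_costs = 100_000_000  # dollars of delivery infrastructure to recover

# If conservation cuts system-wide sales by 20 percent, the
# per-kWh rate must rise to recover the same fixed costs.
rate_before = decoupled_rate(fixed_costs, 2_000_000_000)
rate_after = decoupled_rate(fixed_costs, 1_600_000_000)
print(f"${rate_before:.4f}/kWh before, ${rate_after:.4f}/kWh after")
```

This is why, as the article notes, regulators in some places have resisted: under decoupling, customers who use little energy can end up paying somewhat higher unit rates.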
I Love the Smell of Murderous Thuggery in the Evening
Radley Balko | November 20, 2008, 10:10pm
Legendary high-end Paris candle-maker Cire Trudon introduces "Ernesto," for those of you who want to fill your home with the scent of a sweaty, murdering Latin revolutionary:
In a hotel of Havana, sizzling under the stubborn sun of the Revolution, fierce overtones of leather and tobacco meddle with waxy silence of wood. Breaking out of the cool dimness, sly grimaces emerge, framed by the smoke of cigars and the barrels of guns.
Please note, this scent isn't for the proletariat. This particular block of paraffin infused with the scent of glorious Marxist uprising will set you back a cool $75 for 9.5 ounces. ¡Hasta la Victoria Siempre!
Heck yes, Doug. For my part, I've concluded we could clean up all current fiscal messes by enacting a $5.00 "fair share" tax. Every time some demagogue utters the term "fair share," we charge 'em 5 bucks. Ought to handle the deficit in a couple months, and empty union slush funds, to boot.
This one could be filed a lot of places. The original contains a lot of links to source material:
When Does an Infant Industry Stop Needing Its Taxpayer Allowance?
Federal bioethanol subsidies are 30 years old this month. As reason has documented time after time after time after time, those subsidies to corn ethanol have had deleterious effects on the environment and the price of food around the world.
It's way past time for the industry to stand or fall on its own economic merits. Food Before Fuel, a broad coalition of environmental, farming, taxpayer, consumer and other groups, is calling on the feds to drop counterproductive bioethanol subsidies:
“On many issues, these groups gathered here today do not see eye to eye. But we have come together because we all can agree that the government’s subsidization of the corn ethanol industry is a flawed policy that pits rural industries against one another, raises food prices for everyone and has failed to yield promised environmental benefits,” Brandenberger said.
Duane Parde, president of the National Taxpayers Union, was critical of the ethanol industry as a “demonstrative waste of taxpayer money in a time of economic hardship.”
"President-elect Obama and the 111th Congress have an opportunity to protect taxpayers and end business as usual,” Parde said. “We have spent 30 years and billions of taxpayer dollars subsidizing the production of ethanol with little to show for it. Despite the subsidies, ethanol is not competitive in the marketplace and the industry only survives because politicians shovel our money into their pockets. We must end the bailouts and subsidies for industries that are unable or unwilling to stand on their own."
Craig Cox, Midwest vice president of the Environmental Working Group, said that, "After 30 years of subsidies, ethanol is displacing only 3 percent of the gasoline we use each year, is likely increasing rather than decreasing greenhouse gas emissions, and is threatening our soil, water and wildlife. Yet ethanol gets $3 out of every $4 of tax credits the federal government gives to all renewable alternatives including wind, solar and geothermal. It is time we direct our tax dollars to renewable alternatives, including biofuels, based on how well they protect our climate, our environment and our energy security."
Jason Clay, senior vice president for market transformation at the World Wildlife Fund, noted, “In its work with local communities and habitats across the globe, the World Wildlife Fund has seen the negative impacts of the biofuel policy not only on the environment, but on vulnerable populations throughout the world.”
As Competitive Enterprise Institute senior fellow Marlo Lewis notes:
"After 30 years of government coddling, it's time for this infant industry to grow up and succeed or fail on its own merits. If ethanol is commercially viable then no government support is needed; if it is not commercially viable, no amount of government support can make it so."
Scientists Sequence Half the Woolly Mammoth's Genome Study could be a step toward resurrecting a long-extinct animal By Kate Wong
Editor's note: This story will appear in our January issue but is being posted early because of a publication in today's Nature.
Thousands of years after the last woolly mammoth lumbered across the tundra, scientists have sequenced a whopping 50 percent of the beast’s nuclear genome, they report in a new study. Earlier attempts to sequence the DNA of these icons of the Ice Age produced only tiny quantities of code. The new work marks the first time that so much of the genetic material of an extinct creature has been retrieved. Not only has the feat provided insight into the evolutionary history of mammoths, but it is a step toward realizing the science-fiction dream of being able to resurrect a long-gone animal.
Researchers led by Webb Miller and Stephan C. Schuster of Pennsylvania State University extracted the DNA from hair belonging to two Siberian woolly mammoths and ran it through a machine that conducts so-called high-throughput sequencing. Previously, the largest amount of DNA from an extinct species comprised around 13 million base pairs—not even 1 percent of the genome. Now, writing in the November 20 issue of Nature, the team reports having obtained more than three billion base pairs. "It's a technical breakthrough," says ancient-DNA expert Hendrik N. Poinar of McMaster University in Ontario.
Interpretation of the sequence is still nascent, but the results have already helped overturn a long-held assumption about the proboscidean past. Received wisdom holds that the woolly mammoth was the last of a line of species in which each one begat the next, with only one species existing at any given time. The nuclear DNA reveals that the two mammoths that yielded the DNA were quite different from each other, and they seem to belong to populations that diverged 1.5 million to two million years ago. This finding confirms the results of a recent study of the relatively short piece of DNA that resides in the cell’s energy-producing organelles—called mitochondrial DNA—which suggested that multiple species of woolly mammoth coexisted. “It looks like there was speciation that we were previously unable to detect” using fossils alone, Ross D. E. MacPhee of the American Museum of Natural History in New York City observes.
Thus far the mammoth genome exists only in bits and pieces: it has not yet been assembled. The researchers are awaiting completion of the genome of the African savanna elephant, a cousin of the woolly mammoth, which will serve as a road map for how to reconstruct the extinct animal’s genome.
Armed with complete genomes for the mammoth and its closest living relative, the Asian elephant, scientists may one day be able to bring the mammoth back from the beyond. “A year ago I would have said this was science fiction,” Schuster remarks. But as a result of this sequencing achievement, he now believes one could theoretically modify the DNA in the egg of an elephant to match that of its furry cousin by artificially introducing the appropriate substitutions to the genetic code. Based on initial comparisons of mammoth and elephant DNA, he estimates that around 400,000 changes would produce an animal that looks a lot like a mammoth; an exact replica would require several million.
(The recent cloning of frozen mice is not applicable to woolly mammoths, Schuster believes, because whereas mice are small and therefore freeze quickly, a mammoth carcass would take many days to ice over—a delay that would likely cause too much DNA degradation for cloning.)
In the nearer term, biologists are hoping to glean insights into such mysteries as how woolly mammoths were adapted to their frigid world and what factors led to their demise. Miller notes that by studying the genomes of multiple mammoths from different time periods, researchers will be able to chart the decrease in genetic diversity as the species died out. The downfall of the mammoths and other species may contain lessons for modern fauna in danger of disappearing, he says.
Indeed, the team is now sequencing DNA they have obtained from a thylacine, an Australian marsupial that went extinct in 1936, possibly as a result of infection. They want to compare its DNA with that of the closely related Tasmanian devil, which is currently under threat from a devastating facial cancer.
“We’re hoping to learn why one species went extinct and the other didn’t and then use that [knowledge] in conservation efforts,” Miller says. If the research turns up genes associated with survival, scientists can use that information to develop a breeding program for the Tasmanian devil that maximizes the genetic diversity of the population—and increases the frequency of genes that confer immunity. Perhaps the greatest promise of ancient DNA is not raising the dead but preserving the living.
Some of the most witless gibberish I've read in a while.
Hollywood ponders global warming
By DERRIK J. LANG, AP Entertainment Writer Wed Nov 19, 8:38 am ET
LOS ANGELES – Hollywood insiders and climate change experts agree that they can't shove messages about global warming down audiences' throats. They met at the Skirball Cultural Center on Tuesday to discuss how storytelling in film and TV can translate broad issues about climate change to everyday audiences. "The storytelling has to trump everything," said "West Wing" actor Bradley Whitford.
During the Population Media Center's Climate Change Summit, Whitford, Bruce Davison and Scott Wolf performed "Shuddering to Think," a one-act play by Jon Robin Baitz about a playwright bemoaning an Earth Day play. Baitz and Lawrence Weschler, director of the New York Institute for the Humanities at New York University, joined the actors by webcam. "I think this play is a good template for how to communicate these types of issues to people," said Wolf, who starred in Fox's "Party of Five." "If we render an audience a school assembly, they shut off. The way that this issue is so beautifully incidental in this story is exactly how to get big giant messages across in such a small way."
Howard Frumkin, director of the National Center for Environmental Health at the Centers for Disease Control and Prevention, agreed that viewers are turned off by accusations and hectoring. He said dispensing incorrect information about climate change can also elicit depression and a "sticking-your-head-in-the-sand" attitude from the public. "One thing we've learned is that apocalyptic stories don't work very well," said Frumkin.
David Rambo, a writer and supervising producer for CBS' "CSI: Crime Scene Investigation," pointed to the eighth-season episode "The Case of the Cross-Dressing Carp," which explores the issue of water treatment contamination, as an example of how an environmental topic can be woven into a compelling story — and not offend advertisers or public officials.
"It is a challenge," said Rambo. "A lot of the industries that we point the finger at when we talk about climate change are the very ones that make our livelihoods possible, but there's so much pressure on the corporations that advertise to be responsible world citizens, at this point, they pretty much make their own case for the things they're doing." Many attendees said the major studios have successfully gone green in recent years. They cited e-mailing scripts and call sheets instead of printing them on paper, employing reusable cups instead of plastic water bottles and using hybrid production vehicles for transportation on set instead of gas guzzlers.