Normative Narratives



China’s Model of Economic Development Cannot be Exported to Africa

Cartoon: “Panda Games” by Karl Wimer

Original article:

However, with China’s more recent rise, what has emerged instead is the so-called “China model” featuring authoritarian capitalism. China is actively promoting this new model of China’s political and economic development in Africa through political party training programs, which constitute a key component of Chinese foreign policy toward Africa.

China has seen remarkable economic growth in the past few decades. About three-quarters of the global reduction in extreme poverty since the end of the Cold War can be attributed to China. But as impressive as its experience has been, China’s growth model cannot be exported to Africa.

Why not? China has a strong, stable central government and a huge population. Despite the inevitable corruption resulting from an economy dominated by government investment and a civil society subservient to the government (little transparency, accountability, or judicial independence; no checks-and-balances; no freedom of the press or assembly), the Communist Party is somewhat uniquely dedicated to investing in the human capital of its people and providing some semblance of a welfare system.

These positive features that have fueled China’s growth are generally missing in African countries. African economies tend to be natural resource-based, which do not require investment in people for growth but rather patronage politics to keep ruling regimes in power. As a result, the African continent is dominated by poor governance, corruption, poverty and conflict.

China also happens to be reaching the limits of its government-investment and export fueled economic growth model. Because of the Communist Party’s unwillingness to expand civil liberties, China’s greatest avenue for sustainable growth–its people’s innovative potential (really the only avenue for long-term sustainable growth for any country, but especially China, given its huge population)–remains underutilized. In short, while China’s model can (in the best case scenario) bring a country from low to middle income, it cannot bridge the gap between middle and high income (and, as previously stated, the conditions needed for the Chinese model to bring Africa to middle income simply do not exist).

The Communist Party is facing resistance at home, due to the twin forces of increasing demands for political rights (an inevitable result of advances in communication technologies and globalization) and slowing economic growth. Instead of loosening its grip at home to promote economic growth, the Chinese government is tightening its grip abroad. It is effectively trying to buy more time at the expense of regular African people–this is neo-colonialism.

But isn’t this the same as America’s goal of promoting democracy abroad? Perhaps ostensibly, but not functionally. Democracy is based on the concept of self-determination–of people determining their own future and having a government that carries out that vision. Decades of failures and hard-learned lessons in development reinforce the idea that effective democratic governance is the path to peace, stability, and sustainable growth. This is why the United Nations’ new Sustainable Development Goals (SDGs) are based on accountable, inclusive governance and the protection of human rights–i.e. effective democratic governance.

The Chinese model of political economy, on the other hand, places little to no emphasis on the African people.  It will enrich Africa’s autocratic leaders and Chinese businessmen in the short-run, leaving the host countries with rising inequality, continued extreme poverty, human rights violations, and conflict. 

The only thing the Chinese and American visions for governance and development have in common, aside from being based on capitalism, is that both are visions offered by outside powers. Other than this, they could not be more different.

China states that the training programs are strictly exchanges of opinions rather than an imposition of the China model on African countries. In other words, China invites African political party cadres to China to study the Chinese way of governance on issues they are interested in, but whether they eventually adopt the Chinese way is purely at their own discretion.

The original article suggests that perhaps China is just offering best practices, take ’em or leave ’em, but other recent actions reinforce the idea that this is part of a larger play. Increased military assertiveness by China (South China Sea) and Russia (Crimea, Syria), combined with the economic backing of new Sino-Russo-centric development institutions (the Asian Infrastructure Investment Bank (AIIB) and New Development Bank (NDB)), makes China’s sharing of “best practices” (best for China, anyhow) look like the “soft power” component of a larger “hard power” play to actively and aggressively promote its interests.

Contrast this with likely European (Brexit and other internal EU concerns) and potential American retrenchment (who knows what a Trump presidency could mean for our foreign policy), and an even more concerning picture emerges.

Western backed international organizations, though still the dominant players for now, will face increased competition from organizations (AIIB, NDB) that have lower standards for governance and human rights, potentially compromising what is already a lukewarm embrace of the human rights based approach to development (the IMF, still trying to shake the legacy of failed “Washington Consensus” policies, has embraced more pro-poor, context-sensitive, flexible, ex-ante conditionality; the World Bank, on the other hand, is dragging its feet on mainstreaming human rights into its operations).

Global democratization–which has the benefit of near universal popularity among the civil societies of nations–is facing authoritarian headwinds. Overcoming these authoritarian forces requires strong, principled, long-sighted leadership. Let’s hope said leadership is somewhere on the horizon.

 



Of Brexit and Democracy

In terms of British internal affairs, I find it difficult to take a stance on the outcome of the Brexit vote. Britain may be poorer in the short run, but capital and trade will return to normal as markets self-correct, so I do not foresee a prolonged economic slump. I also do not foresee a further unraveling of the E.U., as there are really no other countries like Britain in the E.U. (or more accurately, if other countries do leave, it will be because of structural issues facing the E.U. that predated Brexit). Of course many will disagree, and much more can be said on either of these claims, but I am glossing over them to get to my main point.

There is nothing to be gained by stomping your feet because the referendum’s outcome is not what you may have liked. In fact, in some ways it was refreshing to see a referendum whose outcome was genuinely up-in-the-air. This is how democracy works–if you wish you could impose a result on this referendum, you are missing the point (or maybe I am–counter-point).

Where I believe Brexit can do its worst long-term damage is not to Britain, or the U.K., or even to the E.U. as a whole. Britain and the E.U.’s member states are modern, democratic, capitalist countries, and as such will prove resilient. It is the world’s developing regions where Brexit will have its greatest impact. These regions need greater contributions in terms of economic aid, democratic capacity building, and conflict prevention / resolution. On conflict prevention / resolution, even before Brexit the E.U. was already punching below its weight, and Britain fielded one of the few active European armed forces. I cannot see how Brexit will not compromise European contributions on these important fronts:

Britain’s decision to quit the European Union could send damaging shockwaves through the bedrock Anglo-American “special relationship,” raising questions about London’s willingness and ability to back U.S.-led efforts in global crises ranging from the Middle East to Ukraine.

The loss of the strongest pro-U.S. voice within the 28-nation bloc, as a result of the “Brexit” referendum, threatens to weaken Washington’s influence in European policymaking and embolden Russian President Vladimir Putin to further challenge the West, analysts and former diplomats say.

Phil Gordon, a former senior foreign policy adviser to Obama, expressed concern that Europe will become inwardly focused on Britain’s departure and independence movements on the continent, leaving the United States to shoulder more of the international burden.

Cameron has cooperated closely with Obama in the security sphere. Britain has been a major military player in U.S.-led campaigns against Islamic State militants in Syria and Iraq, an active ally on the ground in Afghanistan and a strong supporter of sanctions against Russia over its role in Ukraine’s separatist conflict.

While “state-building” may be a fool’s errand, failing to nurture budding democratic movements, particularly in authoritarian countries, risks squandering genuine opportunities for development, enabling the slaughter of innocent people, and setting these movements back for decades.

The global march towards democratization naturally slowed post-Cold War, as the “low hanging fruit” of democratization realized their democratic aspirations. But with Brexit (coupled with an increasingly assertive Russia and China), the inevitability of eventual global democratization for the first time comes into question.

The U.S. has more than carried its share of the load in promoting a democratic international order as Europe built itself back up from the ashes of WWII and further modernized following the Cold War. Now, when domestic considerations are forcing the U.S. to at very least not increase its role in the world, Brexit has compromised the capacity of the only partner that could realistically pick up some of the slack.

Perhaps a pan-European army was never going to be a reality, but Brexit likely made it harder to coordinate the build-ups of individual European armed forces in a synergistic way. 

Britain is a valued member of NATO, but if it is weakened economically by its decision to leave the European Union, its leaders might come under public pressure to pare back military spending — even as the United States is pressuring NATO members to spend more on defense.

The European Union often frustrates American presidents, yet the disintegration of the bloc would be a geopolitical disaster for Washington. Even before Britain’s exit, Germany was Europe’s dominant power, and Chancellor Angela Merkel was Europe’s dominant leader.

“Britain leaving the E.U. now poses a challenge for Germany,” said Nicholas Burns, a former top American diplomat who now teaches at the Harvard Kennedy School. “It will need to provide even greater leadership to keep Europe united and moving forward.”

What the Brits decide to do within their own country is their own decision. However, the role Britain plays in international affairs has massive global implications. Hopefully Britain’s new leadership understands this, and acts accordingly. 



Economic Outlook: Getting the International Aid Fiscal House In Order

Chart: humanitarian spending

Mr. Lykketoft [UN General Assembly President], echoed former UN Secretary General Kofi Annan, who said, ‘there can be no peace without development, no development without peace and neither without human rights.’

As the UN marks its 70th anniversary, the Organization itself is “very much at a cross roads, particularly in the area of peace and security” with the architecture developed over the seven decades now struggling to keep pace with today’s and tomorrow’s threats and geopolitical tensions, in a way that is undermining Member State trust.

…making the UN Security Council (UNSC) more representative and more effective, for example, by addressing the use of the veto in situations involving mass atrocity crimes. But it also includes agreeing budgetary and institutional reforms to prioritize political solutions and prevention across every aspect of the UN’s approach to sustaining peace.

Humanitarian spending generally goes towards natural disaster relief, aiding people in conflict zones, and helping people displaced by both types of crises. As international powers have proven themselves unable to end conflicts, and the negative effects of climate change have become more acute, humanitarian spending has (unsurprisingly) ballooned in recent years.

Investing in clean energy and environmental resilience (preparedness and early warning systems) can mitigate the damage caused by natural disasters, reducing future environmentally-related humanitarian spending. But natural disasters, while tragic, are partly unavoidable–as various sayings go, mankind cannot “beat” nature.

(Do not confuse the inevitability of natural disasters with climate pessimism–the idea that it is too late to fully prevent the negative aspects of climate change, so why even try? There is nothing inevitable about the current trajectory of global climate change, and there are many actions humans can take to make the global economy more environmentally sustainable).

Thankfully, natural disasters generally do not cause long-term drains on aid budgets. While devastating, the natural disaster passes and the affected area can begin to rebuild. Conflicts, however, can be persistent. Persistent conflicts require humanitarian aid year after year, diverting resources from development aid that doubles as conflict prevention.

[The number of] people in extreme poverty who are vulnerable to crisis–677 million. Efforts to end poverty remain closely related to crisis, with 76% of those in extreme poverty living in countries that are either environmentally vulnerable, politically fragile, or both.

There is something particularly discouraging–no, damning–about the fact that man-made, preventable, politically solvable issues should divert such a large amount of resources–80 percent of humanitarian funding–that could otherwise be used to deal with unavoidable humanitarian crises (natural disasters) and to invest in sustainable human development / conflict prevention.

There will always be strains on development and humanitarian budgets. Donor countries are asked to give resources while dealing with budgetary constraints at home. All the more reason that, if an issue is both preventable and leads to persistent future costs, it should be addressed at its root.

To this end, the international community needs to invest more into poverty reduction and capacity building for democratic governance, particularly in Least Developed Countries (LDCs) most susceptible to conflict. When conflict prevention fails, there needs to be a military deterrent (more defense spending by the E.U. and Germany specifically) and UN Security Council reform, so that conflicts can be “nipped in the bud” and not deteriorate to the point where they become a persistent source of human suffering and drain on international aid budgets.

The idea of increasing investment in preventative peacebuilding and sustainable development as means of reducing future humanitarian spending has gained steam within the United Nations system–this is good news. But failure to end emerging conflicts before they become persistent conflicts puts this plan at risk, because these persistent conflicts consume the very resources needed to make the aforementioned investments in the first place. 

The U.S. could effectively lead the fight for UNSC reform. As the world’s largest military and a veto-wielding permanent member of the Security Council, on the surface the U.S. would have the most to lose by introducing a way to circumvent a UNSC veto (perhaps through a 2/3 or 3/4 UN General Assembly vote). But while the world has become increasingly democratic, the UNSC has not. This reality has prevented the U.N. from implementing what it knows are best practices. By relaxing its grip on power through UNSC reform, the U.S. would be able to better promote democracy and human rights abroad.



Conflict Watch: RIP R2P, International Humanitarian Law

 

Original article:

Warplanes level a hospital in the rebel-held half of Aleppo, Syria, killing one of the city’s last pediatricians. A Saudi-led military coalition bombs a hospital in Yemen. In Afghanistan, American aircraft pummel a hospital mistaken for a Taliban redoubt.

The rules of war, enshrined for decades, require hospitals to be treated as sanctuaries from war — and for health workers to be left alone to do their jobs.

But on today’s battlefields, attacks on hospitals and ambulances, surgeons, nurses and midwives have become common, punctuating what aid workers and United Nations officials describe as a new low in the savagery of war.

On Tuesday [5/3], the Security Council unanimously adopted a resolution to remind warring parties everywhere of the rules, demanding protection for those who provide health care and accountability for violators. The measure urged member states to conduct independent investigations and prosecute those found responsible for violations “in accordance with domestic and international law.”

But the resolution also raised an awkward question: Can the world’s most powerful countries be expected to enforce the rules when they and their allies are accused of flouting them?

The failure to uphold decades-old international humanitarian law stems from the failure to uphold a more recently established principle–the Responsibility to Protect (R2P)–which states:

Sovereignty no longer exclusively protects States from foreign interference; it is a charge of responsibility where States are accountable for the welfare of their people.

  1. The State carries the primary responsibility for protecting populations from genocide, war crimes, crimes against humanity and ethnic cleansing, and their incitement;
  2. The international community has a responsibility to encourage and assist States in fulfilling this responsibility;
  3. The international community has a responsibility to use appropriate diplomatic, humanitarian and other means to protect populations from these crimes. If a State is manifestly failing to protect its populations, the international community must be prepared to take collective action to protect populations, in accordance with the Charter of the United Nations.

To be fair, the rise of non-state actors (terrorists) in conflict has made it harder to uphold humanitarian law–these parties do not play by the rules. But typically poor governance is a cause of terrorism, not a result of it. Regardless, the R2P is focused on the role of the state; if the R2P should be invoked when a state fails to protect its population from war crimes, how then can it not be invoked when the state is the primary perpetrator of such crimes?

Failure to uphold the R2P has enabled the current hurting stalemate in Syria, so rife with violations of international humanitarian law that we no longer bat an eye when a story comes across our news feed. You may be asking: what exactly is international humanitarian law? What is human rights law? There is a lot of overlap, so a quick crash course:

International humanitarian law is also known as the law of war or the law of armed conflict.

It is important to differentiate between international humanitarian law and human rights law. While some of their rules are similar, these two bodies of law have developed separately and are contained in different treaties. In particular, human rights law – unlike international humanitarian law – applies in peacetime, and many of its provisions may be suspended during an armed conflict.

International humanitarian law protects those who do not take part in the fighting, such as civilians and medical and religious military personnel.

Essentially, international humanitarian law exists to protect certain human rights of non-aggressors in conflict zones. Human rights are broader (economic / social, political / civil, cultural), and are also applicable during times of peace. Upholding human rights obligations is the key to preventing conflict (positive peace); upholding humanitarian law is meant to protect people’s rights when prevention fails.

It is not my contention that, absent the R2P, we would not see such blatant violations of international humanitarian law. The R2P was crafted in response to the realities of modern warfare, which is dominated by protracted social conflicts (as opposed to the interstate wars of old). The R2P is a positive, an innovation in international governance, but it has proven itself toothless. When the international community fails to adequately respond to the greatest violations of the R2P (when states themselves are the perpetrators of war crimes and violate international humanitarian law), it enables new conflicts to emerge and existing ones to fester by signaling that, at the end of the day, when there is no option left but the use of force, state sovereignty still trumps human rights. The R2P was just the naming of the beast–you still have to slay it.

Early detection of human rights violations through the U.N.’s Human Rights Upfront (HRuF) initiative and a greater focus on preventative peacebuilding are important advancements in international governance. But when a ruler is willing to plunge his country into civil war to hang onto his rule, the R2P must be there to counter him. The R2P should be the mechanism through which we alter the war calculus of such tyrants. Without this deterrent, the effectiveness of HRuF and preventative peacebuilding initiatives is severely curtailed.

The playbook for tyrannical rulers to resist democratic movements has been laid out by Assad–plunge your country into civil war, wait for terrorists to fill the power void of your failed state, and position yourself as the only actor who can fight the terrorists. 

Then, when the international community calls for a political transition to end the fighting, the very parties that went to war to resist the will of the people (In this case Russia, Iran, and Assad himself)–parties with zero democratic credentials themselves–have the gall to invoke the idea of self-determination / respecting the will of the people.

This perversion of the concept of self-determination is particularly infuriating, given the incredible damage caused by an initial unwillingness to even engage the people’s democratic aspirations with dialogue instead of violence. Even if such calls did represent a legitimate pivot towards democratic values (which they most certainly do not), of course no meaningful election could ever take place in a war zone.

Combined with current external realities–budget-strained and war-weary democracies are (for various reasons) not as committed in the fight for democracy as authoritarian regimes are against it–a tyrant will more often than not be able to stay in power, at a huge cost to the people, the country, and the region.

This message–that the purported global champions of democracy and human rights cannot be counted on to support you (while the governments you oppose, which have the military advantage to begin with, will get significant external help)–is the only thing that can stem the tide of global democratization. This cannot be the message that, through our actions, the U.S. and E.U. send to people with democratic aspirations. Democratization is the only path towards modernization and sustainable development–it is truly “the worst form of government, except for all the others”, as Winston Churchill famously stated.

Which is why I call for more military spending by wealthier democracies (more evenly distributed, too–America should cut back) and U.N. Security Council reform. Acting preventatively is always the best option, when it is still an option. But when prevention fails, we cannot simply throw our hands up and say “oh well, prevention is no longer an option, guess there is nothing we can do.” In the face of slaughter, words ring hollow and inaction carries a cost as well.



Economic Outlook: Guaranteed Income vs. Guaranteed Employment

Dr. Martin Luther King Jr., a man whose understanding of social justice was unrivaled, knew the importance of gainful employment in achieving his goals. In his day, Dr. King advocated for (among other things) good jobs for African Americans who had been systematically discriminated against for centuries. This was largely something the private sector could provide, if racial discrimination was sufficiently deterred.

Today, it is not an individual race that faces barriers to gainful employment, but a whole socioeconomic lower class. With corporate profits at an all-time high, and interest rates at historic lows, the past few years would have been the perfect time for corporations to ramp up hiring. However, due to forces such as globalization and automation, it appears the private sector alone will not provide the number of well-paying jobs Americans need–it simply does not have to in order to maximize profits (at least in the short run).

A recent Brookings blog advocated for guaranteed income (i.e. welfare) in the face of this reality:

The labor market continues to work pretty well as an economic institution, matching labor to capital, for production. But it is no longer working so well as a social institution for distribution. Structural changes in the economy, in particular skills-based technological change, mean that the wages of less-productive workers are dropping. At the same time, the share of national income going to labor rather than capital is dropping.

This decoupling of the economic and social functions of the labor market poses a stark policy challenge. Well-intentioned attempts to improve the social performance of the labor market – through higher minimum wages, profit-sharing schemes, training and education – may not be enough; a series of sticking leaky band-aids over a growing gaping wound.

As Michael Howard, coordinator of the U.S. Basic Income Guarantee Network, told Newsweek magazine: “We may find ourselves going into the future with fewer jobs for everybody. So as a society, we need to think about partially decoupling income from employment.

…the answer for American families is an old idea whose time has come—a universal basic income.

While an interesting idea, I think having the government act as an “employer of last resort” is a better, and more politically viable, way of achieving the goals of a universal basic income. Aside from the economic benefits of employment, there are numerous social benefits as well, including: less crime, improved self-esteem / mental health, and experience / skill building (making people more desirable to private sector employers).

Government jobs could work in many sectors, at lower average wages (so people look for private sector work first), but with more of a training component to promote eventual private sector employment.

Below are a few potential areas for government jobs–areas that are severely under-invested in, and that have strong positive “externalities”:

Infrastructure:

The most often cited example when discussing greater government employment is infrastructure. America’s roads and bridges are largely neglected, costing billions a year in lost economic output and putting people’s safety at risk.

Community Development: 

New evidence suggests that where a person grows up has a significant impact on their chances of being successful later in life. Those who grow up in poorer areas find it much harder to “get out” and live productive lives. This is, of course, a huge hindrance to social mobility.

Community development initiatives include mentoring programs (which can mitigate the effects of bad parenting), and “after-school activity” type programs (which can steer young people towards constructive hobbies which often become the basis of employable skills, and away from destructive behavior). Community centers could also offer affordable / free daycare services for younger children.

Parent(s) determine both “who” raises a child, and “where” (since adults make the choice of where they raise their kids)–winning or losing the “parenting lottery” should not be such a strong determinant of future success. While it is impossible to separate the genetic link between parents and their child (the “nature” side of human development), the “who” and “where” (“nurture” side of human development) can be impacted by investing in community development.

Mental Healthcare:

The ACA ensures mental health parity, but not everyone gets the help they need. To close this gap, government work could increase the “supply” of mental healthcare workers. What I propose is a Mental Health Corps, featuring a new job type in the mental healthcare field–something akin to nurse practitioners taking on more of a doctor’s duties to reduce healthcare costs.

One does not need a PhD or MD to provide meaningful help to someone struggling with mental illness. There will always be demand for the best trained mental health professionals from people with the means to afford their services, but for those who cannot, surely some care–even if it is not “the best”–would be greatly beneficial. Such care could help people overcome issues that make them unable to find/hold a job and/or lead to criminal activity. 

Feel free to disagree with me on any of the fields mentioned above. The point I am trying to make is that government employment need not be “digging holes to fill them back up again”.

Robust analyses are needed to compare the costs of our current welfare and criminal justice systems versus the cost of a guaranteed employment program. Not all criminal justice or welfare costs would be eliminated with guaranteed employment (criminal justice reform and a livable minimum wage are also needed) but a significant portion would. It is possible a guaranteed jobs program would not cost much more than what we currently pay to combat the symptoms of unemployment, with much greater benefits. 

While on the topic of welfare, guaranteed employment would remedy one of the major holes in the otherwise sound work-for-welfare requirement of the 1996 welfare reform act. After this reform, those unable to find a job also found themselves without a safety net, falling into “extreme poverty” (which has more than doubled since the reforms were passed). There is a common saying that a nation should be judged not by how well off its wealthiest are, but by how well off its poorest are–with guaranteed employment for those who want it, America would be doing much better on this count. It is past time to plug this obvious hole in welfare reform.

While no one would get rich from government employment, they would be able to live a comfortable life and provide the resources needed for their children to realize their full potential, fulfilling the promises of equality of opportunity and social mobility that America is built upon.



Conflict Watch: Current Strategy Can Degrade But Cannot Defeat The Islamic State

Defeating ISIS means Western boots on the ground

It is commonly accepted that the fight against the Islamic State (IS) is not solely a military fight.  When the U.S. led coalition outlined its plan for combating the group, three main fronts emerged:

  1. Social Media
  2. Financial
  3. Traditional Warfare

Let’s examine how we are doing on each of these fronts, before considering the larger goal of defeating the IS:

Social Media

It is notoriously difficult to police social media sites. Creating an account is free, while monitoring content costs money. When an account is shut down, another one pops up.

The IS has proven itself adept at using social media both as a recruitment tool and as a platform to amplify its message of terror. Its high production quality has the effect of making the group seem more permanent.

Social media sites, understanding the importance of countering the IS message, are stepping up to the plate (perhaps due to the fact that their own infrastructure is being exploited by these groups). One weak spot until recently was Twitter, but a new report shows the company has started to make a stronger effort:

The Islamic State’s English-language reach on Twitter has stalled in recent months amid a stepped-up crackdown against the extremist group’s army of digital proselytizers, who have long relied on the site to recruit and radicalize new adherents, according to a study being released on Thursday.

Twitter Inc (TWTR.N) has long been criticized by government officials for its relatively lax approach to policing content, even as other Silicon Valley companies like Facebook Inc (FB.O) began to more actively police their platforms.

Under intensified pressure from the White House, presidential candidates and some civil society groups, Twitter announced earlier this month it had shut down more than 125,000 terrorism-related accounts since the middle of 2015, most of them linked to the Islamic State group.

In a blog post, the company said that while it only takes down accounts reported by other users it had increased the size of teams monitoring and responding to reports and has decreased its response time “significantly.”

It does not appear social media will become less popular anytime soon. As long as it is a platform that billions of people use, extremist groups will try to use it to further their causes (especially given the success the IS has had).

Therefore, it is the responsibility of social media companies to do everything they can to fight this misuse–it should be a liability issue, a cost of doing business for a very profitable industry.

Financial

Fighting a war and running a “state” are not cheap–the IS has to at least appear to offer some social services and run certain institutions if it wants to claim it is a “state”.

The IS’s primary revenue streams are selling oil, taxing the people in areas it subjugates, seizing money from banks in those areas, and (to a lesser extent) other illicit activities (selling stolen antiques, ransoming hostages, the drug trade, etc.).

Recent drops in oil prices and sanctions have helped squeeze the IS’s finances. But we cannot and are not relying solely on market forces to disrupt the group’s revenue streams:

Air strikes have reduced Islamic State’s ability to extract, refine and transport oil, a major source of revenue that is already suffering from the fall in world prices. Since October the coalition says it has destroyed at least 10 “cash collection points” estimated to contain hundreds of millions of dollars.

U.S. military officials say reports of Islamic State cutting fighters’ wages by up to half are proof that the coalition is putting pressure on the group.

In January, the coalition said air strikes against Islamic State oil facilities had cut the group’s oil revenues by about 30 percent since October, when U.S. defense officials estimate the group was earning about $47 million per month.

[U.S. Army Colonel Steve] Warren said air strikes against Islamic State’s financial infrastructure were “body blows like a shot to the gut”.

“(It) may not knock you out today but over time begins to weaken your knees and cause you to not be able to function the way you’d like to,” he told reporters last week.

It is true there is a limit to what airstrikes can accomplish against the IS without more soldiers on the ground. But airstrikes can be very effective in disrupting oil production and blowing up known cash storage sites. This is an area where the U.S. could expand its efforts more or less unilaterally.

One way to do this could be reconsidering what counts as an acceptable target. The U.S. led coalition has made an effort to avoid striking areas with expensive infrastructure, in hopes it can be used if wrested back from the IS. But, as Ramadi has proven, the IS will rig any areas it loses with explosives before it leaves, so perhaps we should rethink sparing infrastructure if striking it means a more significant dent in the IS’s finances.

What we cannot do is disregard civilian casualties–“carpet bombing” IS held areas is not a viable option. Not only would such a strategy be morally reprehensible, but it would be counter-productive, reinforcing the IS anti-Western message.

Traditional Warfare

In recent months, the IS has lost significant territory in Iraq and Syria. Unfortunately, the group’s practice of rigging areas it loses with explosives makes it very difficult to return liberated areas to “normal” (safe for displaced people to return and lead productive lives).

Furthermore, these gains have not always been made in “sustainable” ways. In Syria, the Assad regime has gained much of the territory the IS has lost (although the Kurds, natural allies to the West, have also gained territory). In Iraq, a Shiite dominated government has made advances with the aid of Iranian fighters, risking further alienating Iraq’s Sunni population (which paved the way for the rise of the IS in the first place).

Further curbing the benefit of IS losses in Iraq and Syria is the group’s expansion into Libya, where it has an estimated 6,000 fighters and rising, exploiting the post-Qaddafi power vacuum. The U.S. led coalition has started an aerial campaign against the IS in Libya, but absent a unified Libyan government, it will be difficult to stop the group’s expansion.

In Libya’s incredibly important neighbor Tunisia, the freedoms associated with the country’s successful democratic transition have created more space for the IS to operate. Ultimately, effective pluralistic democratic governance, which respects the human rights of all people, is the only way to defeat the IS. We must provide Tunisia with all the support it needs, to ensure that democratization does not become a tool the IS uses to its advantage in the short run.

Degrading AND Defeating the Islamic State

The good news is we have made progress on each of the three main fronts in the fight against the IS (Social Media, Financial, Traditional Warfare). The bad news is that while we are able to degrade the IS, we have done so in a way that ignores the underlying factors that led to the group’s rise in the first place.

Let’s not downplay the very real benefits of degrading the IS. It limits the group’s ability to spread misery and death. It compromises the group’s ability to carry out attacks abroad, and reduces the likelihood it will inspire lone-wolf attackers.

But the fight against the IS is expensive, and the longer the group is allowed to operate, the more its assertion that it is a “caliphate” becomes the fact on the ground. Moreover, time gives the IS (which has proven itself quite tactical and resilient) room to metastasize and evolve. Imagine if the group connected its Middle Eastern territory with large swaths of Northern Africa, transforming its ideological link to Boko Haram into an actual military alliance. This may seem like an unlikely scenario, but everything the IS has done up until this point has defied the odds against it.

To avoid perpetual war we must degrade the IS in a way that also attacks the group’s underlying message–that there is no viable alternative for Muslims. On this front, much work remains. Governments in Islamic countries should put aside sectarian divides and treat the fight against the IS as the fight for the soul of Islam that it is. Unfortunately, there is little to suggest this will happen anytime soon, a point recently made by political comedian Bill Maher:

“Why don’t they fight their own battles? Why are Muslim armies so useless against ISIS? ISIS isn’t 10 feet tall. There are 20,000 or 30,000 of them. The countries surrounding ISIS have armies totaling 5 million people. So why do we have to be the ones leading the fight? Or be in the fight at all?”

If you consider the countries bordering Iraq and Syria — Iran (with 563,000 armed forces personnel), Jordan (115,500), Kuwait (22,600), Lebanon (80,000), Saudi Arabia (251,500) and Turkey (612,800) — you get a total of 1.6 million.

Add in Iraq (177,600) and Syria (178,000) themselves and that brings the total to 2 million. That’s less than half of Maher’s figure.

When we heard back from Maher’s spokesman, he said the comedian was also including the armies of Bahrain, Egypt, Oman, Qatar and the United Arab Emirates.

If they (reservists) are included as part of a country’s army, the total for those 13 countries Maher wants to include rises to 4.95 million, as Maher said.

If you don’t include the reservists, the number of troops in the countries cited by the comedian only rises to 3.6 million.
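As a quick sanity check, the quoted figures do add up as the fact-checkers describe. A minimal sketch in Python, using only the active-duty numbers from the excerpt above:

```python
# Active-duty personnel figures quoted in the fact-check excerpt above.
bordering = {
    "Iran": 563_000, "Jordan": 115_500, "Kuwait": 22_600,
    "Lebanon": 80_000, "Saudi Arabia": 251_500, "Turkey": 612_800,
}

total_bordering = sum(bordering.values())
print(total_bordering)  # 1645400 -> roughly 1.6 million

# Adding Iraq and Syria themselves:
total_with_iraq_syria = total_bordering + 177_600 + 178_000
print(total_with_iraq_syria)  # 2001000 -> roughly 2 million, under half of Maher's 5 million
```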

Looking at the largest Muslim players, there is little hope in sight. Turkey is more interested in fighting the Kurds–one of the strongest forces against the IS–than the IS itself. Saudi Arabia and Iran are wrapped up in proxy wars in Syria and Yemen, and are ideologically opposed to pluralism, democracy, and one another. Egypt under Sisi has become increasingly authoritarian, and as a result finds itself consumed by its own terrorist insurgency. Iraq, as mentioned earlier, is relying too heavily on Iranian forces. In Syria, Assad is hoping that with Russian and Iranian support he can knock out all opposition except the IS, completing his “fighting terrorism” narrative and cementing himself in power as he kills indiscriminately. Jordan seems like a true ally in this fight, but it itself is a monarchy that will not fight for democratic values, and even if it would it cannot be expected to take on this fight alone.

It often seems that the IS is everyone’s second biggest concern. The inability to rally a meaningful Pan-Arabic counter-insurgency against the IS is not ideal (and is actually quite sad), but it is a reality we must acknowledge if we are to put together a coalition that CAN end the group’s reign of terror.

To this end, we need more support from those who do share our values. America cannot be the World’s Police, but the world does need a “police force”. Every country that believes in and has benefited from democratic governance and human rights has a role to play. A global coalition (including ground troops) must include all these parties, and be proportionately funded and manned (meaning the U.S. will still have to play a major leadership role).

To some, such a coalition may seem even less likely than a meaningful Pan-Arabic counter-insurgency. But in my mind, corralling support from interdependent allies that share common values, and coordinating financing to fairly and sustainably spread the cost, is more achievable than completely changing the behavior of historically adversarial actors.

We need this global coalition not just to defeat the IS, but to prevent the next Syrian Civil War. Global security is at a crossroads and must evolve–prevention is the cheapest way to maintain a peaceful international order. An effective deterrent, alongside the promotion of democracy and human rights, is an indispensable element of conflict prevention.

Global security is a global public good; absent visionary leadership, it will be under-invested in, to the detriment of all.


Leave a comment

Economic Outlook: Why to NOT Raise The Social Security Retirement Age–Labor Force Participation and the “Life-Cycle of Employment”

Original article, courtesy of Politifact:

During the Republican presidential debate in North Charleston, S.C., Sen. Ted Cruz, R-Texas, took aim at the nation’s economic record under President Barack Obama.

“The millionaires and billionaires are doing great under Obama,” Cruz said. “But we have the lowest percentage of Americans working today of any year since 1977. Median wages have stagnated. And the Obama-Clinton economy has left behind the working men and women of this country.”

Cruz is on to something. One key employment statistic known as the civilian labor force participation rate is at its lowest level since the 1970s. This statistic takes the number of Americans in the labor force — basically, those who are either employed or who are seeking employment — and divides it by the total civilian population.

Here’s a chart going back to the mid 1970s.

When the civilian labor force participation rate is low, it’s a concern, because it means there are fewer working Americans to support non-working Americans.

…a notable factor in the decline of the labor-force participation rate is the aging of the Baby Boom generation. As more adults begin moving into retirement age, the percentage of Americans who work is bound to decline.

…there’s another way to read Cruz’s words. He said “the lowest percentage of Americans working” since 1977, which could also refer to a different statistic, the employment-population ratio. This statistic takes the number of people who are employed and divides it by the civilian population age 16 and above.

The difference in this case is that using the employment-population ratio, Cruz’s statement is incorrect. Unlike the labor-force participation rate, the employment-population ratio has actually been improving in recent years, although it’s below its pre-recession highs.

Here’s a chart showing this statistic over the same time frame:

If you exclude the Great Recession, the employment-population ratio was last at its current rate in 1984, not 1977.  So by that measurement, he’s close.

(Note: this blog will not meaningfully address the other major labor market issue raised by Senator Cruz–stagnant wages. Nor will it discuss strategies to alter America’s aging demographic. The primary focus is labor force participation by different age groups, the relationship between older workers staying in the workforce longer and youth unemployment, and how those issues are related to America’s Social Security system.

It is not my intention to spark inter-generational warfare, but rather to point out that a “fix” commonly floated to bring America’s fiscal house in order–raising the Social Security eligibility and retirement ages–could have significant unintended negative consequences).

A declining labor force participation rate is worrisome. Even the more positive statistic (employment-population ratio) is cause for concern.

But as important as what is happening, is why it is happening. Failure to accurately answer this question risks the wrong policy response, which would at best fail to solve the problem and at worst further exacerbate it. The conservative camp would undoubtedly focus on the welfare state and disincentives to work. The liberal camp would probably focus on economic inequality and the resulting lack of opportunity facing many poor, mostly minority youths.

I am not interested in getting into a partisan debate, although my regular readers know which side I generally fall on. What neither side is likely to consider (because it does not fit neatly into either economic narrative) is in what age ranges most of the employment to population ratio change has taken place. To shed some light on this, let’s look at a recent analysis done by the Bureau of Labor Statistics (The BLS numbers use the 16 and older employment-population ratio definition. In the interest of full disclosure, I work for the BLS, but not in any employment statistics capacity. Furthermore, the views expressed in this blog are my own, and are not the views of the BLS).

Labor force participation rates (percent) and percentage-point changes, by age group:

                            Participation rate       Percentage-point change
Group                       1994   2004   2014       1994–2004   2004–14
Total, 16 years and older   66.6   66.0   62.9         -0.6        -3.1
16 to 24                    66.4   61.1   55.0         -5.3        -6.1
16 to 19                    52.7   43.9   34.0         -8.8        -9.9
20 to 24                    77.0   75.0   70.8         -2.0        -4.2
25 to 54                    83.4   82.8   80.9         -0.6        -1.9
25 to 34                    83.2   82.7   81.2         -0.5        -1.5
35 to 44                    84.8   83.6   82.2         -1.2        -1.4
45 to 54                    81.7   81.8   79.6          0.1        -2.2
55 and older                30.1   36.2   40.0          6.1         3.8
55 to 64                    56.8   62.3   64.1          5.5         1.8
55 to 59                    67.7   71.1   71.4          3.4         0.3
60 to 64                    44.9   50.9   55.8          6.0         4.9
60 to 61                    54.5   59.2   63.4          4.7         4.2
62 to 64                    38.7   44.4   50.2          5.7         5.8
65 and older                12.4   14.4   18.6          2.0         4.2
65 to 74                    17.2   21.9   26.2          4.7         4.3
65 to 69                    21.9   27.7   31.6          5.8         3.9
70 to 74                    11.8   15.3   18.9          3.5         3.6
75 to 79                     6.6    8.8   11.3          2.2         2.5
75 and older                 5.4    6.1    8.0          0.7         1.9
Age of baby boomers        30–48  40–58  50–68
The change in labor force participation seems to have been driven primarily by:

  1. Fewer younger people working
  2. More elderly people working

In fact, the decline of prime working age labor force participation (say 25-55) over the last 20 years has been quite small.

It is true that once you get to the older age brackets (especially 60+), each group represents a smaller percentage of the overall population (see Table 1), so you cannot compare different groups’ percentage changes directly. But even factoring in population share, increases in elderly workers have had a significant impact on overall employment. As America’s population continues to age, that impact will only grow:

[Chart: U.S. age distribution over time]

One could argue that older and younger people generally do not occupy the same jobs. Sometimes this is true, sometimes it is not. Furthermore, a firm might replace one highly paid older worker with more than one entry-level worker. This is all anecdotal–without doing more research the exact relationship between older workers, younger workers, and job openings is unknown–but surely there is some relationship (probably one that varies greatly by industry).

Next time a politician talks about raising the Social Security eligibility and retirement ages, consider:

  1. Poorer people (who rely on Social Security the most) are not living longer.
  2. Keeping people working longer means fewer jobs available to younger people. This also contributes to America’s exploding student loan debt problem (even those who do graduate college have a difficult time getting good paying jobs, at least partially because of competition from older, more qualified workers).

Both of these related issues–youth un(der)employment and student loan debt–create a drag on the economy, as younger people delay starting their “adult lives” (starting families, buying homes, etc.). This drag on economic growth leads to–you guessed it–less job creation. Based on the BLS numbers, we clearly need to make youth employment a greater priority, as ignoring the problem compromises both current and future economic growth.

When we consider raising the Social Security eligibility age, we must consider unintended consequences. To responsibly increase the eligibility age, the government would have to launch a youth employment program. This could offset most (if not all) of the savings associated with raising the retirement age. Perhaps instead of raising the eligibility age, we should consider making social security a needs-based program, eliminating the cap on taxable income, or both. This may not be “fair” to people who have paid the most into the program (or those who have been more financially conservative throughout their lives), but it would make the Social Security system more financially sustainable, without the unintended negative consequences.

America does not have to enact policies that exacerbate youth unemployment and/or discomfort poorer elderly people in order to save a few bucks. Our strong financial system and global faith in America’s creditworthiness ensure we can continue to finance important programs (for people of all ages) with long term economic implications. But this global faith in America’s creditworthiness is predicated on the belief that we can correctly identify and address our structural economic problems (and thus continue to grow and repay our debts). To preserve this faith, we must work across the partisan divide to responsibly and sustainably address these problems, not recycle stale partisan arguments that are largely unrelated to the problems at hand.


Leave a comment

Economic Outlook: The Politics of Division and Class-Based Affirmative Action

During a March 2008 campaign speech at the National Constitution Center in Philadelphia, Barack Obama said:

Most working- and middle-class white Americans don’t feel that they have been particularly privileged by their race. Their experience is the immigrant experience – as far as they’re concerned, no one’s handed them anything, they’ve built it from scratch. They’ve worked hard all their lives, many times only to see their jobs shipped overseas or their pension dumped after a lifetime of labor. They are anxious about their futures, and feel their dreams slipping away; in an era of stagnant wages and global competition, opportunity comes to be seen as a zero sum game, in which your dreams come at my expense.

Obama then noted the consequences:

When they hear that an African-American is getting an advantage in landing a good job or a spot in a good college because of an injustice that they themselves never committed…resentment builds over time.

Obama’s words ring just as true today, as highlighted by the Public Religion Research Institute’s 2015 American Values Survey. While the majority of Americans believe that historically marginalized groups face “a lot of discrimination”, there is a large portion of all Americans (25%) who believe whites face “a lot of discrimination”. Predictably, certain groups (Republican, Tea Party) hold these views even more strongly.

These perceptions fuel the “politics of division”–the “us” versus “them” mentality where “we” are hard workers who bust our butts just to make ends meet, while “they” are lazy “takers”. Of course, at the macro level there are examples of “us” and “them” in every race and culture. But the diffusion of news sources and social media “echo chambers” counter this obvious truth, reinforcing the politics of division.

The politics of division always comes to the forefront during Presidential campaigns, when voters want to know what a candidate will do for “us”, and how they will punish the “them”. In the current campaign season, national security and immigration concerns have made the politics of division even more acute.

These are not just philosophical considerations; the politics of division has real world implications. By turning issues that affect people of all races (globalization, stagnant wages, social immobility, etc.) into racial ones, people end up voting against their economic interests in the name of cultural / lifestyle considerations. When their economic situation consequently continues to deteriorate, they double down on their scapegoating, moving further away from the real answers to their problems. And if those 25% of people (who believe that whites face a lot of discrimination) happen to come out and vote in droves, then something that seems as absurd as “President Trump” could come to fruition.

So how do we counter the politics of division? While no one government program can fix race relations, one obvious program to reconsider–the one that Obama alluded to in 2008, and which is before the SCOTUS today–is the current structure of affirmative action. I believe an economic class-based system would not only help bridge racial divides, but would also more effectively promote opportunity and social mobility.

Before you deride me for being at best misguided and at worst a racist, consider the following arguments:

Low Hanging Fruit:

A common argument against the current affirmative action model is that colleges pursue “low-hanging fruit”–minority students from good backgrounds who would have gone to college anyway.

The number of minority students accepted through a class-based affirmative action system would be lower compared to the current model (although, due to racial inequalities in income and wealth, a class based system would still disproportionately benefit minorities). However, it is possible that a class based system could actually lead to greater total enrollment by minority groups, by targeting those who otherwise would not go to college. 

More research and testing would be needed to determine how a class-based system would impact total enrollment figures among various minority groups.

College Enrollment Rates by Race and Class:

College Enrollment Rates for Recent High School Graduates

Income group     2008    2013
All              68.6%   65.9%
High income      81.9%   78.5%
Middle income    65.2%   63.8%
Low income       55.9%   45.5%

The analysis is based on U.S. Census Bureau data. For the above comparisons, the ACE study defined low-income families as those from the bottom 20 percent, high income as from the top 20 percent and everyone else in the middle group.
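The decline has not been evenly distributed. A minimal Python sketch, using the figures from the table above, shows low-income enrollment falling roughly three times as fast as high-income enrollment:

```python
# Enrollment rates (percent) for recent high school graduates, from the ACE table above.
enrollment = {
    "High income": (81.9, 78.5),
    "Middle income": (65.2, 63.8),
    "Low income": (55.9, 45.5),
}

for group, (rate_2008, rate_2013) in enrollment.items():
    print(f"{group}: {round(rate_2013 - rate_2008, 1)} points")

# High income: -3.4 points
# Middle income: -1.4 points
# Low income: -10.4 points
```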

…one possible theory offered by the analysis to explain the drop is that the perceived cost of college may be the issue at play here. “The rapid price increases in recent years, especially in the public college sector, may have led many students — particularly low-income students — to think that college is out of reach financially,” the report says.

“These data are even more worrisome with this fact in mind: while the percentage of low-income students in elementary and secondary schools is increasing, the percentage of low-income students who go on to college is falling,” the analysis says. “Said a bit differently, at the same time that low-income individuals are enrolling in college at lower rates, the majority of young adults in the precollege education pipeline are from those same low-income communities.”

While enrollment for “low income” students of all races has declined, enrollment rates for black students have been increasing:

[Chart: college enrollment rates by race]

Graduation rates are still much lower for black students, but affirmative action predominantly affects acceptance, not graduation.

A college degree remains the best investment a person can make in themselves. The problem is not getting minority students into college, it is getting “low income” students (using net family wealth as the main consideration) of all races to graduate college. Admittedly, affirmative action is only a small part of the solution. But due to its linkages to the politics of division, and because the current model addresses a problem that no longer exists (low enrollment rates for minorities), affirmative action should be changed to a class-based system.

Regardless of how the current Supreme Court rules on the appeal of Fisher v. University of Texas, it would be interesting to see how a class-based affirmative action system played out at local or state levels. Such tests in our laboratories of democracy would likely yield results that benefit all Americans.


Leave a comment

Transparency Report: Closing the Rift Between What the UN Knows and What the UN Does

fdrquote

Quote, FDR Memorial, Washington D.C.

Original article:

He [Current General Assembly President Mogens Lykketoft] also touched on the issue of Security Council reform, saying the subject was “of central importance to a large majority of the Membership” of the UN, and that the General Assembly had decided to immediately continue the intergovernmental negotiations on Security Council reform in its 70th session.

Mr. Jürgenson [Vice President of ECOSOC] said that the relationship between the Charter bodies of the UN should be revitalized.

“The changing nature of conflict, from inter-State wars to complex civil conflicts that are intractable and reoccurring, highlights the fundamental link between sustainable development and lasting peace,” he said.

ECOSOC and the Security Council, he said, can interact on a regular basis on issues of concern to them both, from the promotion of institution building and improved governance to the consequences of economic and financial crises on global stability and the impact of environmental degradation on weakened societies.

“On each dimension of sustainable development, economic, social or environmental and on their contribution to the overall objective of peace, the UN development system, under the oversight of ECOSOC, has a lot to contribute,” he said. “The Economic and Social Council can be the counterpart of the Security Council to embrace a truly holistic approach to peace and security, an approach that world leaders have recognized as the only one which can lead to sustainable results.”

Human rights theory recognizes that the broad array of human rights (economic, social, cultural, political and civil) are mutually dependent. Furthermore, certain rights, such as civil and political rights, create the enabling environment needed for people to claim other rights and hold violators accountable.

Any society that prioritizes the human rights of all its citizens will, in time, experience a virtuous cycle of sustainable human development and “positive peace”. In contrast, a society that “tolerates” certain human rights abuses in the name of security / stability greatly risks further restrictions of other rights; one rights violation invites others, and the vicious cycle of repression, poverty, and conflict emerges.

The human rights based approach to development therefore recognizes the interdependence of ostensibly separate U.N. operations. Specifically, preventative action–natural disaster preparedness and conflict prevention–features prominently in development efforts.

The UN Development Programme (UNDP), the UN’s primary development policy body, uses the slogan “Empowered Lives, Resilient Nations”. “Empowered lives” refers to upholding human rights obligations and consultative policy-making–enabling people in the developing world to be active participants in their country’s modernization. “Resilient nations” refers to conflict prevention and natural disaster mitigation, reasonable welfare programs, and the social cohesion and institutions needed to resolve internal grievances peacefully.

Of course, prevention and preparation only work at certain points during disaster response. Conflicts in full swing must be addressed decisively or they will fester and devolve. Countries that do not amply invest in natural disaster preparedness must bear huge rebuilding costs (this is not just a poor-country problem; think about the devastation caused in the U.S. by Hurricane Katrina and Super-Storm Sandy).

Addressing issues once they have reached catastrophic levels is much more expensive than investment in prevention / mitigation. The current model–ignoring warning signs followed by a too-little-too-late response–strains humanitarian aid budgets, resulting in the need to make untenable, short-sighted decisions that perpetuate future crises.

Whenever a capable, trustworthy partner exists on the ground, the international community should not be constrained by short-term financial considerations. The world’s poorest countries should not be consigned to larger future bills, social problems and insecurity because of a failure of leadership in global governance.

The international community’s inability to adequately address today’s problems stems primarily from two sources. One is short-sighted decision making due to financial constraints. The second is the ineffective structure of the U.N.S.C.

Here are a few suggestions to make the U.N. more responsive.

1) UNSC Reform:

The inability of the U.N.S.C. to preventatively defuse conflicts, due to concerns over “national sovereignty”, condemns large groups of people to a future of conflict and economic decline. Conflict does not know national borders, leading to spillover conflicts that affect whole regions. Even once resolved, post-conflict countries are susceptible to sliding back into conflict. Taken together, these factors show why an inability to deal with one problem proactively can result in long-term instability for a whole region.

This issue gets to the root of the power struggle between the permanent members of the U.N.S.C. that champion human rights / democracy (U.S., Britain, France) and those that champion national sovereignty (or more specifically, the ultimate supremacy of national sovereignty, even in instances where the Responsibility to Protect should clearly be invoked)–China and Russia.

Those opposed to “Western” values believe promoting “human rights” is just a way for America to impose its values abroad. I would contend human rights represent values that all people desire, by virtue of being human. Reforming the U.N.S.C. to give a General Assembly super-majority the power to overrule a U.N.S.C. veto would reveal which side of the argument is correct. I would bet the global majority would almost always land on the side of taking action to defend human dignity against any who would challenge it–terrorist or authoritarian ruler.

As the world’s largest military and a veto-possessing permanent member of the Security Council, America on the surface has the most to lose from such a reform. This is precisely why America must lead this push; if we champion this brave and uncertain approach, it would ultimately lead to a much more effective and timely defense of the very principles we hold dear. By loosening our grip on power, we would actually achieve our desired aims through a democratic process–what could be more American than that? 

Human rights violations lead to revolution and conflict, during which legitimate opposition is branded “terrorism”. Inaction by the international community leads to “hurting stalemates” and power vacuums that are filled by opportunistic extremist groups. Authoritarian governments then become the more tenable option, and their “fighting terrorism” narrative becomes self-fulfilling (even though their abusive actions often led to the uprisings in the first place). Failure to reform means we are OK with this status quo–we should not be.

During the 70th session of the UN General Assembly, many countries called for U.N.S.C. reform. When such disparate countries with differing needs use their moment in the global spotlight to promote this common cause, it is a message that should be taken very seriously.

2) Development Aid Smoothing

This is admittedly a less developed plan, as I am no financial economist. But it remains clear to me that the world needs some sort of mechanism to smooth development aid for the world’s Least Developed Countries.

We see it time and time again–poor countries slowly slide into worsening conflict or are devastated by predictable natural disasters because:

a) The LDCs do not have the resources or capacity to address these issues preventatively;

b) The international community cannot muster the funds, as they are all tied up in long-term humanitarian missions (likely because not enough resources were invested preventatively elsewhere). This is why there is never a shortage of disasters: we ignore budding issues to address full-blown ones, and by the time those full-blown issues are under control, the ignored budding issues have festered into the new issue du jour.

The continued inability of the international community to address problems before they get worse is not only financially short-sighted, it is a failure of the U.N.’s mandates and fuels the perception (and increasingly the reality) that the international community is incapable of addressing the problems of the 21st century.



Economic Outlook: For-Profit Failures Further Support Free Community College Plan


New Research:

On Thursday (9/10), two researchers — Adam Looney of the Treasury Department and Constantine Yannelis of Stanford University — released an analysis of a new database that offers much more detail. It matches records on federal student borrowing with the borrowers’ earnings from tax records (with identifying details removed, to preserve privacy). The data contains information about who borrows and how much; what college borrowers attended; their repayment and default; and their earnings both before and after college.

The data suggests that many popular perceptions of student debt are incorrect. The huge run-up in loans and the subsequent spike in defaults have not been driven by $100,000 debts incurred by students at expensive private colleges like N.Y.U.

They are driven by $8,000 loans at for-profit colleges and, to a lesser extent, community colleges. Borrowing for both of these has become far more common in recent years. Mr. Looney and Mr. Yannelis estimate that 75 percent of the increase in default between 2004 and 2011 can be explained by the surge in the number of borrowers at those institutions.

It’s not hard to see why. The traditional borrowers from four-year colleges tend to earn good salaries out of college and pay back their loans, even during the recent years of economic weakness. The typical borrower who left a less selective four-year college in 2010 earned $35,000. For those leaving more selective colleges, the figure was $49,000. Those salaries obviously aren’t lavish, but they’re high enough to let most people meet their initial loan payments — and they tend to lead to bigger salaries in later years.

Borrowers at for-profit and community colleges, by contrast, earn low salaries — a median of about $22,000 for those exiting school in 2010 — and have had difficulty paying their loans.

The new findings are consistent with earlier data — such as statistics showing that default rates are actually lower among borrowers with large loans than among borrowers with small loans.

But the new data, which goes back two decades, shows how much the landscape of borrowing has changed. Today, most borrowers are older and have attended a for-profit or community college. A decade ago, the typical borrower was a traditional student at a four-year college.

Why did the face of borrowing change so rapidly in just a few years? During the recession, millions of students poured out of a weak labor market and into college to improve their skills. Historically, these students would have gone to community colleges. But with state tax revenues taking a nose-dive, community colleges were starved for funds and unable to expand capacity to absorb all of the new students. Students took their Pell Grants and loans to for-profit colleges. Enrollments at these schools spiked, and so did borrowing.

Behind the increase in for-profit college loan defaults is an underlying problem. How did these for-profit schools become so prominent so quickly? During the Great Recession, there was a spike in demand for schools where people could acquire marketable skills cheaply. This is exactly what economic theory told us would happen:

  1. With a larger pool of people looking for work (higher unemployment), employers could be more selective, requiring greater credentials for a given job than they otherwise would have.
  2. As the labor market worsened, the “opportunity cost” of obtaining required skills (foregone wages) decreased.
  3. As people’s income decreased (both as a result of the recession, but also part of a long-term trend of stagnant median incomes versus increasing tuition costs), demand for the “inferior good” (in this case, for-profit and community colleges) increased.

 

Compounding the issue, many War on Terror veterans were returning home with GI Bill tuition assistance in hand but little idea of what to do with it.

At the same time, the recession resulted in lower tax receipts, and municipal and state budgetary restraints became more acute. Instead of increasing funding to deal with the predictable influx of students, community colleges faced budget cuts. The resulting surplus of students was readily snapped up by for-profit colleges.

For-profit colleges are, on average, four times more expensive than community colleges, and produce poorer graduation rates and career outcomes. In other words, for every student the federal government paid to send to a for-profit school, it could have sent four students to community college. Furthermore, those four students would be more likely to graduate and have better career prospects.
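That four-to-one claim is simple arithmetic. Here is a minimal sketch of the trade-off; the dollar figures are hypothetical placeholders, since the post only gives the rough 4:1 cost ratio:

```python
# Back-of-envelope: how many students a fixed subsidy pool covers at each
# type of school. Dollar amounts are hypothetical; only the ~4:1 cost
# ratio comes from the discussion above.
cc_cost = 3_500                    # assumed avg. community college cost per student
fp_cost = 4 * cc_cost              # for-profit ~4x as expensive
budget = 1_000_000                 # hypothetical fixed aid budget

cc_students = budget // cc_cost    # students covered at community colleges
fp_students = budget // fp_cost    # students covered at for-profits

print(cc_students // fp_students)  # -> 4: four community college students
                                   #    per one for-profit student
```

Whatever the absolute costs, a fixed pot of aid money goes roughly four times as far at community colleges.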

People have different reasons for wanting to attend community college. Some people want to learn a specific marketable skill, with no intention of pursuing a bachelor’s degree (or beyond). Therefore, in order for community colleges to be eligible for new proposed federal subsidies, they should have to offer specialized vocational training programs.

For other people, community college is a stepping stone toward a more advanced degree. For these students, a free community college option would allow them to find out whether “college is for them” without taking out loans (I would argue that the absence of debt itself would lead to better academic outcomes). Another requirement for receiving expanded federal assistance should be making it easier to transfer community college credits to four-year colleges.

Of course, it is not solely up to community colleges whether four year institutions accept their credits. The Federal Government could, however, use the power of the purse and scale grant eligibility based on a four year school’s willingness to accept credits from community colleges. I bet community college credits would become more transferable if this were the case…

Perhaps some of these reforms are already baked into the Obama plan–if so, good. Either way the government, with the assistance of academic and private sector partners, should develop guidelines to help community colleges meet technical program and transfer-ease requirements.

With these requirements met, community colleges could better serve their two target groups–returning adult students looking for technical skills, and out-of-high-school prospective students who think they want to pursue a bachelor’s degree but do not have the conviction and/or financial resources to jump right into a four-year college.

If properly tailored, a tuition-free community college plan would not be a “hand-out for community colleges”. Rather it would be, in large part, a transfer of funding from for-profit colleges (which rely almost exclusively–86% of revenue–on federal money to operate) to community colleges, in exchange for reforms that allow community colleges to better serve their students.

Some figures here might help put this “transfer” into context. Obama’s community college plan calls for $1.4 billion in funding in 2016 and $60 billion over the next decade. Compare this to the $32 billion the Department of Education spent on for-profit grants and loans from 2009-2010 alone.
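Annualizing the plan’s price tag puts those numbers side by side. The figures come from the paragraph above; the naive per-year averaging is my own simplification:

```python
# Compare the cited cost of Obama's plan ($60B over a decade) with one
# year of Dept. of Education grants and loans to for-profit colleges
# ($32B, 2009-2010). Averaging the plan evenly over ten years is a
# simplification for illustration.
plan_total = 60e9
plan_years = 10
for_profit_one_year = 32e9

plan_per_year = plan_total / plan_years        # $6B per year on average
ratio = for_profit_one_year / plan_per_year
print(round(ratio, 1))  # -> 5.3: one year of for-profit aid cost over
                        #    five times the plan's average annual cost
```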

Isn’t better educating more people, for far less money (per person), exactly what student aid programs should strive for? Now, to be sure, pushing more people towards community college would increase the cost of the tuition-free plan. As many people have pointed out, to make the plan less costly tuition assistance could be reserved for less wealthy applicants.
