Articles

Student Loan Debt: This Generation’s Albatross

CARLY MORGAN

MANAGING EDITOR

A comprehensive discussion of today’s American economy would not be possible without some treatment of college tuition, the federal loans granted to alleviate its soaring cost, and the consequent burden of student debt under which the vast majority of college graduates now enter the workforce and adult life.

Federal lending to students to help cover the cost of college tuition dates back to 1958. Shortly thereafter, the role of federal lending in college financing expanded when the Higher Education Act of 1965 established the federal government as the primary provider of student financial aid. Which is to say, federal student debt is hardly a recent phenomenon. But you might not realize that given the fervency of the discussion surrounding it today.

That fervency is rooted, in part, in the growth of student debt that characterized the turn of the 21st century. Between 2000 and 2014, the total volume of federal student debt increased by more than 300 percent to over $1.1 trillion, surpassing credit card debt to become the largest source of consumer indebtedness other than mortgages. And how many headlines like this have you seen in newspapers and television broadcasts over the last few years: “Graduate Cannot Pay Off $100,000 Student Loan Debt After Earning Degree in Flying Squirrels”? Stories like that seem to pop up all the time and, regardless of how representative they actually are, have certainly had a hand in shaping the national dialogue about the cost and relative value of a college education.

First, a little background: federal student lending programs were first established in 1958 under the National Defense Education Act. The legislation was passed in response to mounting concern throughout the country that American scientists were not fully equipped to compete with their Soviet counterparts (who had just launched Sputnik, the first artificial satellite, in October of the previous year). To that end, the NDEA incentivized specific courses of study, offering grants, scholarships, and loans to high school students showing promise in math, science, engineering, and foreign languages—all fields of study likely to produce national defense personnel.

The broader goal of the NDEA was to provide low-cost loans to high school graduates who wanted to attend college. This program was expanded, most notably, in 1965, with the aforementioned Higher Education Act, part of President Lyndon Johnson’s Great Society domestic agenda. The goal of the HEA was to improve social mobility and equality of opportunity by making college more accessible and affordable.

It was at this time that the Federal Guaranteed Student Loan program was introduced. Under this program, loans to finance higher education originated with, and were funded by, private lenders, but their repayment was guaranteed by the federal government in the event that the borrower defaulted. Today, the loans that originated with this program are better known as Stafford Loans, and although the terms of these loans have evolved since their inception (more on that in a minute), the Stafford Loan Program remains the largest source of low-interest college loans.

Because Stafford Loans are backed by the federal government, they are available at low interest rates regardless of a borrower’s credit history or economic security. When they were first implemented, Stafford Loans were strictly need-based, and eligibility was determined by a financial needs test. Originally, all Stafford Loans were subsidized, meaning that while the student was in school, the government paid the interest on their loans.

In 1972, the Basic Educational Opportunity Grant, known today as the Pell Grant, was created in order to help low-income students attend college. Unlike student loans, Pell Grants do not require repayment. In that way, they are considered a sort of “fiscal stabilizer,” meant to mitigate the effects of tuition increases on lower-income families. The original intent of the Pell program was to protect low-income students from having to take out loans in order to attend college. Remember this, because it’s going to be an important detail later on in the student debt conversation.

Unsubsidized Stafford Loans were introduced as part of the Higher Education Amendments of 1992. This development greatly expanded the number of eligible borrowers by eliminating the requirement of demonstrable financial need. The Higher Education Amendments of 1992 also created the Federal Direct Loan program, which provided low-interest student loans that originated with the US Department of Education instead of with a private lender. The Federal Direct Loan program has accounted for all government-backed loans since 2010, when the federal government took over loan origination from banks by eliminating the “guaranteed” loan. Today, all federal student loans originate with the government. According to the Federal Reserve, as of February 2015, the United States government owned roughly 86 percent of the $1.5 trillion of aggregate student loan debt.

Also in 2010, student debt surpassed credit card debt and became the largest nonmortgage source of consumer indebtedness. Nearly 70 percent of students who graduate with a Bachelor’s degree do so with some amount of debt. According to the US Department of Education, that translates to more than 43.3 million people with student loan debt as of the 2015-16 academic year.

The macroeconomic and sociological implications of these figures are enormous. In recent years, student debt has blocked the smooth transition of many college graduates into a middle-class adulthood. While addressing a crowd at LaGuardia Community College earlier this year, New York Governor Andrew Cuomo said of the growing burden of student debt, “It’s like starting a race with an anchor tied to your leg.”

Student loan borrowers are less likely to buy cars, purchase homes, get married, or start families. Home ownership among Americans under age 35 has been steadily declining since 2007, and a July 2017 report from the Federal Reserve Bank of New York shows that nearly half of all 23- to 25-year-olds are still living with their parents. A study from the Pew Research Center found that the share of 18- to 29-year-olds who are married dropped from 59 percent in 1960 to 20 percent in 2010. And data from the Centers for Disease Control and Prevention indicates that, after years of consistent decline, the US birth rate reached an all-time low in 2012 and has yet to show any sign of a trend reversal. (It is worth noting here that lower birth rates are often associated with economic recession; what makes this decrease unique, however, is that while the economy has since recovered, the birth rate has not.)

A 2012 study published in the Harvard Law Review attributes the problems of higher education economics to four specific causes: increasing tuition; rising indebtedness; mounting defaults; and declining returns.

So, is it worth taking on that much debt—or any debt at all—in order to attend college? Will the eventual payoff ever really outweigh not just the cost of attendance, but the opportunity cost of forgoing all those years in the workforce and instead spending them in a classroom? Further, will the growing glut of college graduates actually enhance the career prospects of those who attend (and graduate), or have we saturated the market to the point of devaluing the Bachelor’s degree? And finally, is a college degree still the great equalizer it was once thought to be, fulfilling the promise of meritocracy?

In the years during and immediately after the Great Recession, it became almost fashionable to suggest that the American economy was in the midst of a “college bubble”; that the state of student debt constituted a “crisis” akin to the credit crisis and housing bubble that colored the first decade of the century. Central to these claims are several implicit assumptions: that the long-term benefits of a college education do not outweigh the short-term costs of attendance; that recent and prospective high school graduates have misjudged the value of a college degree; and that nothing can be done to correct the current trajectory of higher education economics. But do any of these claims hold water?

Let’s start by looking at the overall increase in student debt. Reports of the rapid growth in aggregate student loan debt throughout the early 2000s make it sound as though tuition prices are rising at a rate so astronomical as to be out of reach for all but the economic elite. But the reality of the growth in student borrowing has to do with the number of students taking out loans, not the amount of money each one requires. In fact, since 2000, average per-student borrowing has remained relatively stable. What has changed, however, is the overall number of borrowers. And that’s thanks, in part, to the Great Recession.

Higher education is a countercyclical industry, which means that demand for it rises even amid adverse economic conditions. In fact, during every period of recession in America since the 1960s, college enrollment has increased. Economic recession is usually accompanied by a higher-than-usual unemployment rate; as a result, the opportunity cost of attending college is dramatically lessened for a larger segment of the population. Plus, the lowered and stagnant wages of a weak economy increase financial pressure, motivating more people to seek ways of supplementing their income. One long-term investment to that end is a new credential.

Further, the enrollment increase precipitated by the recession was concentrated among nontraditional students. In the years during and immediately following the Great Recession, nontraditional students represented almost half of all new borrowers. When considering the growth of aggregate student debt in recent years, this has some important implications.

Nontraditional students have come to represent a significant portion of the college population. In fact, by some estimates, as few as one in five modern-day college students actually qualify as what we would consider “traditional”: an 18- to 22-year-old full-time student who enrolled in college right out of high school and is financially dependent on their parents. Nontraditional students, on the other hand, are defined by the National Center for Education Statistics as meeting any one of the following criteria: older than 18 at the time of initial enrollment; takes classes part-time; works full-time; is not dependent on parents for financial aid purposes; has one or more dependents other than a spouse; is a single parent; or does not have a high school diploma.

Since nontraditional students are more likely than their “traditional” counterparts to be independent from their parents for financial aid purposes, they’re subject to higher borrowing limits; and because their financial independence often goes hand in hand with less parental support than traditional students receive, taking advantage of those higher loan limits is not uncommon. What is also unique about the “nontraditional” higher education experience is where it is most likely to take place: at for-profit and/or open-access institutions and community colleges.

According to the American Association of Community Colleges, the average age of a community college student is 29. At for-profit institutions, just over three out of every four students are over the age of 24. As the numbers suggest, these institutions are perhaps better-suited to the nontraditional student population than to the pool of recent high school graduates. And that has to do with elasticity of supply.

In the November 2013 edition of The ANNALS, an academic journal published by the American Academy of Political and Social Science, Andrew Barr and Sarah E. Turner note that short-term labor fluctuations, such as those experienced during and after the Great Recession, have little to no impact on enrollment figures at research universities and liberal arts colleges; such institutions simply lack the capacity to respond to changes in enrollment demand. Conversely, at community colleges and open-access for-profit institutions, “recessionary conditions have a substantial impact on the returns to relatively short-duration enrollment and training opportunities.” The short-term career and technical training programs offered by these institutions are especially popular among nontraditional students.

Unfortunately, these programs also have relatively low labor market returns. For one thing, for-profit universities have traditionally had low completion rates. A study released by the Education Trust, a nonprofit advocacy organization, indicated that only 22 percent of full-time, degree-seeking students at for-profit institutions graduate, compared to 55 percent at public institutions and 65 percent at private nonprofit institutions. But even for those who do manage to complete their program of study, the return on investment is still likely to be underwhelming. As Sara Goldrick-Rab points out in her book Paying the Price: College Costs, Financial Aid, and the Betrayal of the American Dream, for-profit institutions produce “degrees that employers value far less than community college degrees, often equating them to high school diplomas.” A 2014 report from the US Department of Education adds hard data to that rather harrowing comparison: it showed that three out of four for-profit higher education programs produce graduates who go on to earn less money, on average, than high school dropouts.

Further, tuition at a for-profit university is, on average, six times higher than community college tuition, and twice as high as the average tuition rate for a four-year public university. Given the higher cost of attendance, coupled with the fact that nontraditional students typically come from lower-income families, it’s no surprise that more than nine out of ten students at for-profit institutions have to take out loans to finance their education. But those loans are made all but impossible to repay when students leave college with a pile of debt, and no degree to show for it.

What makes this situation even more remarkable, however, is the amount of money flowing from the government to these institutions. Data from the US Department of Education indicates that every for-profit university in the country receives at least 70 percent of its funding from federal sources, much of it in the form of student loan packages. And even though those federal dollars are issued under the assumption that they will be paid back by the borrower, this is complicated in the for-profit sector by the fact that nontraditional borrowers—who, in case you’ve forgotten, are disproportionately concentrated in the for-profit and 2-year sectors—experience the highest rates of default on their student loans. In fact, research published by the Brookings Institution in September 2015 showed that among students who left college in 2011 and subsequently defaulted on their loans (meaning they didn’t make any payment for at least 270 days during their first two years of repayment), 70 percent were nontraditional borrowers.

Student loans happen to be a substantial source of revenue for the federal government: between 2013 and 2022, the federal student loan program is projected to earn an average of $18 billion per year. When it comes to defaulted loans in particular, the government collects approximately 120 percent of the amount of each one. But if a borrower defaults on a federally guaranteed loan, it’s the taxpayer who gets stuck picking up the tab. And considering that nontraditional students tend to borrow tuition money and default on their student loans at higher rates than their traditional counterparts, when those students are preyed upon by for-profit and open-access institutions promising job returns that those institutions have proven statistically incapable of delivering, it affects all of us.

If there exists a “crisis” in student debt, then it exists almost entirely in the for-profit and open-access sectors of higher education. There, students—many of whom come from low-income backgrounds looking for marketable, job-specific training—are often misled by advertising that promises an affordable training program that will lead to better job prospects, and a better economic future. What those students end up with more often than not, however, is debt, full stop. Goldrick-Rab writes, “Without a college degree, and having forgone years of work experience and seniority to attend college, former undergraduates without degrees can find only low-paying jobs. Even paying off a modest amount of loans puts them in a compromising condition. This is the real student debt crisis.”

Whether or not a college education is a worthwhile investment greatly depends on the quality of that education. Poor labor market outcomes—and, consequently, loan outcomes—are strongly associated with attendance at institutions with low rates of completion and job placement.

The value of a college education itself, however, is still essentially irrefutable. On one side of the “Is college actually worth it?” debate are those who argue that rising tuition prices, coupled with declining wages in entry-level positions, mean that going to college is no longer an economically sound investment. However, in a 2012 article appearing in the Journal of Economic Perspectives, authors Christopher Avery and Sarah Turner cite decades’ worth of data to support their claim that “the earnings premium for a college degree relative to a high school degree nearly doubled in the last three decades.”

And although the Great Recession took a heavy toll on many sectors of the economy, this “earnings premium” did not decline as a result; in fact, Avery and Turner write, “the alternative to a weak labor market for college graduates [following the Great Recession] is a much weaker labor market for those without a college degree.” In short: Does economic downturn make it harder for a college graduate to get a job? No doubt. Does it make it even harder for someone without a college degree to find work? You bet.

So, while there is a measurable return on investment for those who pursue a college degree, the magnitude of those returns is highly conditional on the type of institution the student attends. A for-profit institution, for example, despite its higher sticker price, does not typically lead to a credential any more valuable than a high school diploma, while a degree from a nonprofit or public institution can still yield positive labor market returns, even if that education must be funded by student loans.

With aggregate student debt ballooning at a seemingly alarming rate, the possibility of a “higher education bubble” is a prominent feature of most discussions of student debt in the United States. However, such rhetoric fails to acknowledge some crucial factors contributing to the growing amount of student debt. As already discussed, the spike in borrowing is attributable in large part to institutions that tend to enroll a disproportionate share of nontraditional borrowers, many of whom were prompted to enroll by the economic conditions of the Great Recession.

In a 2015 study published by the Brookings Institution, Adam Looney and Constantine Yannelis write, “In 2000, borrowers from for-profit and 2-year institutions accounted for less than 35 percent of all borrowers; by 2014, the number of borrowers had more than doubled at for-profit schools and 2-year institutions, rising by 114 and 167 percent, respectively.” This increase contributed to higher aggregate debt not only because more college students means a larger pool of borrowers, but also because, as mentioned earlier, students enrolling at these institutions are more likely to have to borrow money to cover tuition costs than students at private nonprofit and public institutions.

But these enrollment patterns alone are not enough to explain the rate at which student debt has increased since the turn of the century. Another contributing factor is the state of private lending markets during the Great Recession.

It’s no secret that the 2008 financial crisis, considered by many economists to be the worst of its kind since the Great Depression, severely weakened the banking industry and ultimately gave way to the Great Recession. The crisis actually began in 2007 with the deterioration of the subprime mortgage market and reached a fever pitch on September 15, 2008, with the collapse of the investment bank Lehman Brothers. The subprime mortgage crisis forced banks all over the United States to write off billions of dollars in subprime loan losses; this, in turn, left banks without the willingness or the solvency (or both) to continue lending money at the rate they had previously. Consequently, most banks tightened their lending terms and raised their lending restrictions to unprecedented levels, resulting in a rapid reduction in loan availability to both businesses and households.

This, of course, proved to be disastrous in its own right, as GDP declined in countries all over the world, and median household wealth in the United States fell by a staggering 39 percent between 2007 and 2010. But another, less talked-about effect of the financial crisis was the effect it had on how families funded higher education. The disappearance of private lending alternatives likely generated a whole new wave of demand for student loans, as high school graduates who would have once relied on their parents to secure funds for college found that they now had to do it themselves. Barr and Turner summarize this phenomenon succinctly, writing, “The growth in federal student loans may overstate the true increase in borrowing for students to attend college, if the increase in Stafford loans supplanted other types of loans, which may have been a relatively cheap source of credit for parents before the financial crisis.”

This means that the panicked tone of much of the mainstream coverage of student debt growth is likely misplaced, as the increase in student debt may reflect a shift in how families fund higher education more than it indicates a sharp increase in borrowing for college. A recent study published by the Harvard Law Review Association reports that between 1990 and 2010, federal borrowing increased by 319 percent, as students in 2010 borrowed an average of three times more per year from the federal government than students in 1990 did. However, this increase is not caused solely by rising tuition prices, as some are quick to suggest; it also has a lot to do with the greatly diminished number of borrowing sources available to parents in the wake of the financial crisis.

Although this does suggest that the impact of tuition increases on student debt is often overstated, it certainly isn’t to say that the two are unrelated. While student debt steadily increased to become the largest source of nonmortgage household debt, state appropriations to higher education began a precipitous decline, falling 17 percent between 2007 and 2012. As public subsidies declined, reliance on tuition dollars increased, shifting the cost burden of higher education from state governments to individual students.

The decrease in state appropriations to public universities was yet another indirect result of the Great Recession. As Barr and Turner point out, the tightening labor markets, collapsed housing market, and overall wealth reductions that characterized the late 2000s and early 2010s “resulted in shrinking state revenues as the bases for income, corporate, and sales tax fell. An increased reliance on income taxes, changes in the set of items subject to sales tax, and reduced diversity of the tax base resulted in higher volatility in tax revenue during the last decade.”

Unfortunately, as state appropriations to higher education decreased, tuition prices continued to increase. And although the federal government is more generous now than ever in its provision of financial aid to students from lower socioeconomic backgrounds, the rise in unsubsidized borrowing (i.e., the form of student borrowing that is not need-based) indicates that many middle-class families are having trouble meeting rising costs. This middle-class “squeeze” is largely a product of the fact that, while the Pell Grant program has an annual expenditure of $35 billion (the highest it has ever been), growth in Pell-related spending hasn’t kept up with the increased demand for financial aid, meaning that the purchasing power of the grant has declined drastically over the last couple of decades. Sara Goldrick-Rab writes, “In 1990, only the poorest quarter of American families had to pay much more than 20 percent of their annual income for higher education. Today, 75 percent of families pay at least that—after all grants are distributed.”

The many measurable improvements that a college degree has the potential to make in a person’s economic (and, indeed, overall) well-being are evidence that higher education is still a worthwhile investment, despite rising tuition costs and the adverse macroeconomic effects of widespread student debt. However, it seems unconscionable that so many American families—three out of every four, to be exact—should have to devote the kind of financial resources now necessary to secure a better future for their high school graduates. Today, college is all but essential for lower-income young adults who wish to move up the socioeconomic ladder. By improving the way higher education is financed, the United States would be poised to become the kind of meritocracy in which hard work and perseverance really do make for a better life, and economic well-being and upward social mobility are not contingent on the depth of one’s pockets.