On the future, MOOCs, tenure, etc.

Last week I spoke on MOOCs in an online seminar with faculty and staff from about 40 different schools. The consensus among that group seemed to be that developing in-house online programs would benefit their institutions. In other words, many of them are looking to create some form of digital teaching program in order to have a version of that product ready in case demand from students (or parents) increases. Many of them are also excited by the pedagogical possibilities of using digital platforms differently.

This Rutgers University statement on audio/visual recording is very interesting. In part, it is admirable in that it finds compelling legal and pedagogical reasons to recommend against allowing widespread recording (and sharing via social media) of course materials or classroom activities. Namely, it focuses on the copyright issues that might arise if, for instance, YouTube becomes a popular place for students (or faculty) to post lecture videos or even student discussions. On the one hand, the classroom is seen as a protected space in the copyright code. As I point out below, the recent GSU lawsuit provides a case in point. On the other hand, taping conversations that go on in a classroom, and making them public, has significant pedagogical consequences. Even in the best of scenarios – where students welcome being taped, raising no privacy concerns – we risk having them act as if every meeting were a small episode of Big Brother, where the watching only makes them perform more fully for the camera, rather than engaging in the risky, personal reflection that leads to real learning. The point made by the Rutgers faculty is crucial – public conversations are not always as productive, especially for students who are just trying to figure out what they believe and what they want to learn. For them, the privacy of the classroom, and instructors’ responsibility in the conversation, are essential elements of successful, critical pedagogy. Rethinking copyright and privacy seems essential to how we move forward in the open, online environment. But so is understanding how we value our own teaching and research – and how we expect others to value it.

It is hard not to read this statement in the context of the broader conversation about “disruptive innovation” and venture capital’s version of rethinking the classroom for a digital age. This discourse is seductive because it is based on a certain promise: MOOCs and even lecture capture could be very useful for generating conversations beyond our individual classrooms, perhaps drawing our students into that broader conversation through the dynamic forums Stanford, Harvard, and MIT have created in their MOOC platforms. So it is possible that educators could reach and interact with far more students than they do now – to the benefit of both themselves and their students. This, in turn, sounds very good to the U.S. Department of Education, which is underfunded and under fire.

To them, the learning possibilities are less important than the economies of scale. MOOCs seem important to business-minded administrators (and their newspaper editorial gurus, e.g. Thomas Friedman) because they seem to solve the entire cost problem in education in one fell swoop: education is made technologically glamorous, infinitely productive, highly in demand, and incredibly cheap. It has never been a better time to be an educational entrepreneur because everything is on the table. Every other industry has had active, intelligent laborers replaced with machines that could standardize their human actions, turning our jobs over to robots (or at least threatening to): we can mechanize factories to lower the labor costs of cars, so why is it taking so long to do away with these pesky professors? As Andrew Delbanco recently quoted Richard Vedder, “With the possible exception of prostitution . . . teaching is the only profession that has had no productivity advance in the 2,400 years since Socrates.”

For them, the promise of MOOCs is that they upset much of the present infrastructure of education, especially the tenure system itself, which is the precarious but fundamental subsidy of the entire present system. Lost in the conversations about both academic publishing and the arrival of MOOCs is the way the present infrastructure was made possible by these broader subsidies. It is true that an ever smaller proportion of faculty are put on the tenure track, but every year for the better part of the last two decades, we have turned out a fresh new crop of people hoping to secure a place in that realm, working relentlessly for free with the brass ring of tenured employment dangling ever more remotely in front of them. Those who don’t succeed at first continue teaching as adjuncts, exploiting themselves for the larger goal of educating the next generation of Americans.

Even leaving aside the idea that shared governance might actually be more efficient (and therefore tenure a better political-economic model on which to base the university), the security and responsibility of tenure is a priceless motivation of the higher education system. Removing it will create untold havoc in the intellectual economy that serves as the foundation of MOOCs. Many people have pointed out that MIT, Harvard, and other elite schools are basically trading on their own well-established brands and resources, currently with little hope of turning a profit on these activities. Already there are signs of people deciding not to go back to school, with many graduate programs reporting lower enrollments and law schools in unprecedented decline. How will MOOCs (or education in general) function without tenure as a subsidy? How will academic publishers continue to produce knowledge? The boycotts of Elsevier and others are small potatoes compared to the decimation of the free academic labor pool that would be caused by truly unleashing market forces on higher education. Gerry Canavan had a different way of phrasing this, which got some attention back in February:

http://gerrycanavan.wordpress.com/2013/02/18/some-preliminary-theses-on-moocs/

The whole post is worth reading, but this part stuck out to me especially:

Failing to account for, and pay for, the continuation and reproduction of a necessary system isn’t economic rationality; it isn’t a hard-nosed commitment to making the tough choices; it’s the exact opposite. It’s living as if there is no future, no need to reproduce the systems we have now for the future generations who will eventually need them. The fantasy that we could MOOCify education this year to save money on professor labor next year, and gain a few black lines in the budget, ignores the obvious need for a higher educational system that will be able to update, replenish, and sustain the glorious MOOCiversity when that time inevitably comes. Who is supposed to develop all the new and updated MOOCs we’ll need in two, five, ten, twenty years, in response to events and discoveries and technologies we cannot yet imagine? Who is going to moderate the discussion forums, grade the tests, answer questions from the students? In what capacity and under what contract terms will these MOOC-updaters and MOOC-runners be employed? By whom? Where will they have received their training, and how will that training have been paid for? What is the business model for the MOOC — not this quarter, but this decade, this century?

Related to MOOCs is the struggle over scholarly communication more generally. Academic fair use and the ability of libraries to create digital archives are under direct attack by the academic publishing industry. But it is not that industry alone. Take the recent lawsuit brought by Sage, Oxford University Press, and other major academic publishers – publishers who make their money off the very labor we all do virtually for free, but which is really subsidized by the tenure system. Because a condition of our job is that we publish, we do so with little expectation of direct economic gain from these activities. These publishers sued librarians and faculty – as individuals – for the policies they had in place around online course reserves. The library provided some digital course reserves, and faculty also provided some digital copies of articles and book chapters through learning management systems (LMS) – like the Moodle platform many of us use to provide course materials. Since Georgia State University – as a public institution – has sovereign immunity in these cases, the publishers instead sued the individual administrators within the system, holding them liable for all of the activities around sharing.

Publishers accused the schools of making greater use of their materials than copyright law allowed. But the case is really about the future of academic publishers in a world where we can share digital materials far more seamlessly than ever before. And if we can be sued for sharing academic materials, what is to stop us from getting sued if a student videotapes us showing a movie clip and the video becomes popular enough to attract attention? The judge in the case found that only 5% of the uses the publishers cited would fall outside even a conservative definition of fair use, and thus that faculty and librarians are largely on the right side of the law. Yet the publishers have said they are going to appeal the case – not to protect their own economic interests, but on the advice of the authors of the texts in question, who would like your students to send them a residuals check every time you have them use that article (thank you very much). In short, those of us in academia and academic publishing are living in a den of vipers, in which each side is willing to strike the other down for the few remaining morsels of public higher education funding.

If we can’t put course reserves on library websites, how will students not have to pay for them on MOOCs? And what does that mean in terms of their being “open”? In some cases, getting students to pay for the text is the primary goal of the MOOC: the economics professor at U.C. Irvine, for example, was removed as instructor from his own MOOC in part because he insisted that, to participate, the 40,000 or so students would need to buy his $90 economics textbook. The flipside of this is what the AAUP has often feared will happen: once our lectures can be recorded, once our MOOCs are created, what is to stop universities from repackaging them, deeming them “works for hire,” and leaving faculty no claim to copyright in their teaching materials?

MOOCs might present some interesting possibilities for the students who will soon be graduating into our universities and colleges – and for those who, while qualified, will not be able to afford a formal education because support for the public system is itself under fire. Since so much of their conversation with their peers will be mediated by some form of audio-visual artifact, it is intriguing to think about what the circulation of our classroom discussions might mean. It is difficult to chart a path through this landscape that doesn’t ultimately lead to a cliff on the other side. Luckily my colleagues – like Bryan Alexander – are on the case.

I’m not sure if the Rutgers Senate statement strikes the right balance, but at least it errs on the side of giving students and faculty control over how recordings will be used. Now they just need a statement of imagination outlining all the possible ways it could help enhance learning if recording were allowed.

Futurists vs. Spies

Sigh.

I like the idea of Wikileaks – and I believe Manning is being held illegally – and am usually at least amused, if not excited, by the antics of Anonymous. But it really sounds like their most recent cache of hacked emails is a desperate ploy to keep Assange and his associates on the media radar. Ironically, they seem to have done so by taking seriously a firm that had been trying for some time to get on that very same radar. As Max Fisher, writing in The Atlantic, put it earlier today,

Stratfor is not the shadow-CIA that Wikileaks seems to believe it is, but much of the blame for this mistake actually lies with Stratfor itself.  The group has spent over a decade trying to convince the world that it is a for-hire, cutting-edge intel firm with tentacles everywhere. Before their marketing campaign fooled Anonymous, it fooled wealthy clients; before it fooled clients, it hooked a couple of reporters.

I think The Atlantic, The Christian Science Monitor, and others who’ve criticized the revelations of the latest leak may be overstating the case (for instance, according to The Statesman, emails included in the leak reveal that Pakistani ISI leaders were in contact with Osama Bin Laden), but for the most part this seems like more of a takedown of a low-level player in the intelligence community.

On the other hand, Stratfor, which operates here in Austin, is criticized by several of the naysayers as being a farce because it relies on open-source data (a critique people have made of the firm for more than a decade). But as several of Bryan’s posts point out, there is nothing necessarily obvious about the patterns, or even the information, that emerge from the vast data available openly on the web. The Atlantic itself reported on an AP story in November that described the CIA’s social media monitoring center, where analysts try to read 5 million tweets a day. And other private firms, like Recorded Future, are trying to create big-data solutions for predicting events around the world by taking openly available media (Open Source Intelligence, or OSINT) and sifting it through various semantic filters to find patterns. I can’t reveal my source, but I also know for a fact that the Swedish Defense Research Agency has partnered with firms like Recorded Future to enlist their help in developing software and frameworks that are almost identical to what Stratfor claims it uses or has used in developing its intelligence. I suppose the proof is in the pudding, but there’s nothing inherently faulty in the model they claim to use.

And in any case, the more troubling thing the episode reveals in the Wikileaks philosophy is the rather thin film sitting between the supposedly unique and isolated realms of journalism, intelligence, spying, academic futurism, and even corporate espionage – where merely sitting in one of these ill-defined categories is apparently excuse enough for Anonymous and Wikileaks, because they have a different mission statement and different political allegiances. If Coca-Cola can’t be bothered to find out the number of PETA supporters in Canada, and Stratfor has some low-level interns it can put on the job instead, it is hardly a case of international significance. I don’t approve of the actions of Coca-Cola or Dow, but I don’t think contracting with them to do what amounts to basic research qualifies as a punishable offense. If doing this justifies breaching Stratfor’s security and revealing internal e-mails to the world, where would they draw the line? It makes Wikileaks less a venue for revealing state secrets to the world and more a website run by well-equipped bullies with more fashionable ways of gathering secret information than even Stratfor (though one imagines Stratfor or Recorded Future could probably make more sense of the 5,000,000 emails in aggregate).

Still, it does seem that the firm’s secondary specialty is self-aggrandizement. Stratfor founder George Friedman wrote a surprise bestseller in 2009 in which he took his futurist schtick to the next level: attempting to predict The Next 100 Years (excerpted here). The book sounds impressive (or at least entertaining), and he followed its success with a more manageable time horizon: The Next Decade. Though Friedman is obviously well read, according to Daniel Drezner’s review of the latter book, his predictions are basically meditations on two controlling factors:

In both books his prognoses are based on two factors that persist over time: geography and demographics. Geography is what you think it is–a country’s physical attributes and resources. For example, Japan’s island status and dearth of natural resources mean that it relies heavily on sea-lanes. Demographics change more rapidly than geography, but those changes take decades. So when Friedman observes that southern Europe is depopulating, he’s going to be right for quite some time.

Leaning on these two factors, The Next Decade arrives at a few conclusions that might seem counterintuitive: The United States has devoted disproportionate resources to counterterrorism. China’s ascent has been exaggerated because rising inequality and slower economic growth will lead to domestic instability. Russia and Germany will become closer allies.

The belief that geography equals destiny, though, is complicated by the fact that geography is a constant – and constants are lousy predictors of change. Sure, Friedman sounds sensible when he forecasts that Japan, bristling at its reliance on the U.S. Navy to keep its sea-lanes open, will take a more aggressive approach to the Pacific Rim. This prediction also sounded reasonable two decades ago, when Friedman co-authored a book called The Coming War With Japan. But there are many aspects of world affairs other than geography – historical ties, technological innovation, and religious beliefs, for instance – that can influence governments.

These predictions are more along the lines of academic analysis of world history – an admirable enterprise, to be sure, but one which should teach its purveyor the dangers of making definitive statements. His understanding of geography as a key to military and economic development figures into many historians’ frameworks – off the top of my head, it figures centrally into the late Giovanni Arrighi’s world-systems analysis in The Long Twentieth Century. In that pathbreaking work (which inspired a spirited dialogue with David Harvey in the last years of Arrighi’s life), the relative geographical isolation of England and then the US allows each to devote its economic resources to a less scattered array of military hardware. But even this is only one among many factors – and nowhere near the only one Arrighi uses to explain the rise and fall of economic and political hegemony over the course of the last 500 years. In other words, as one pithy review of The Coming War with Japan (1991) put it:

This one-sided, sensational book contends that a military confrontation between the United States and Japan is likely within the next 20 years. According to the authors, the issues are the same as they were in 1941: Japan needs to control access to its mineral supplies in Southeast Asia and to have an export market it can dominate. In order to do this, Japan must force the United States out of the western Pacific. There is little effort to explore the substantial differences between the 1940s and the 1990s.

In their defense, this seems to have been a popular theme in the culture more generally. Arrighi’s book (also published in the early 1990s) makes dramatic predictions about the rise and supremacy of Japan – predictions he had to revise in his follow-up, which explained the (by him, anyway) unpredicted rise of China.

So whether Stratfor is a good intelligence firm, a secret spy organization, or just another font of futurist analysis, we can probably assume that a good number of the 5,000,000 e-mails Anonymous hacked and Wikileaks has made available will be even less significant to the course of human events.  As with all such predictions, time will tell.

Testing the future with anecdote

Historian Rick Perlstein offers a thought experiment to test out several trends.

I like to imagine, as a thought experiment, the day, perhaps not too far off, when a Republican president nominates a Supreme Court Justice married to someone of the same sex, maybe even with the sanction of “orthodox” theology – with that gay Supreme Court justice casting the deciding vote that finally overturns Roe vs. Wade.

It’s a kind of design fiction for politics.

Ladies Home Journal FTW (on predictions)

In yesterday’s BBC News Magazine, Tom Geoghegan has a rundown of several predictions made by John Elfreth Watkins in the early 20th century. Watkins wrote an article titled “What May Happen in the Next Hundred Years” for the Ladies Home Journal. Geoghegan builds from an earlier overview of the article by Jeff Nilsson in The Saturday Evening Post, but also queries Patrick Tucker of the World Future Society on how well these predictions hold up. Overall, Watkins was quite prescient. Moreover, he was very upbeat. In Nilsson’s words, “Every one of his predictions involved an improvement in the lives of Americans. He saw only positive change in the new century. Today’s predictors don’t see the future so optimistically, but will they see it as clearly as Watkins?” Some of those positive changes, predicted correctly:

  • Digital color photography
  • Rising height (correctly predicted at 2″) and increased life expectancy (lowballed: he says 50; it was 77) of Americans
  • Slower population growth
  • Mobile phones, networked computing, television, or something like them
  • Pre-prepared meals (some Hungry Man eaters may dispute this as necessarily “positive”; also, he was wrong in that they unfortunately are not delivered to every household by pneumatic tube)
  • Central heating and A/C
  • Hothouse vegetables, bigger fruit, household refrigerators, and refrigerated transport of produce across the country and the hemisphere
  • Cheaper cars
  • Airplanes, armored tanks, and high-speed trains

What is more interesting to me, however, are the things he got wrong (and, perhaps, why). In some cases, he might just be ahead of the curve – such as his prediction that all wild animals would be extinct (no mention of electric sheep). In others, it was clearly political pressure that kept what was a reasonable expectation from taking hold. Two stand out especially clearly.

The first is his prediction that, in Geoghegan’s paraphrase, there would be no cars in large cities. In Watkins’ words, “All hurry traffic will be below or above ground when brought within city limits.” What this meant to Watkins was that people would walk, take moving walkways, or ride trains around the city rather than driving. This was a reasonable prediction – and would have been a reasonable way to build sustainable cities, were we at all interested in that. At the end of the 19th century and for much of the early 20th, this was largely the trajectory, with cars being only a small component in the transportation ecosystem of major cities. Electric streetcars, for instance, were a common utility run by municipalities across the country. Among the changes necessary for the transformation to take hold: car companies like General Motors bought up streetcar systems (bought them up, that is, and eliminated them), and they were aided in their efforts by various laws, subsidies, and eventually nationwide projects of investment. As Guy Span summarizes this 20th-century transformation:

While GM was engaged in what can only be described as an all out attack on transit, our government made no effort to assist traction whatsoever and streetcars began to fade in earnest after the Second World War. In 1946, the government began its Interstate Highway program, with lots of lobbying from GM, arguably the largest public works project in recorded history. In 1956, this was expanded with the National Interstate Highway and Defense Act. Gas tax funds could only be spent on more roads. More cars in service meant more gas taxes to fund more roads. And we got lots of roads.

More and better roads doomed the interurban electric railways and they fell like flies. Outstanding systems like the Chicago North Shore Line (which operated from the northern suburbs into Chicago on the elevated loop until 1962) were allowed to go bankrupt and be scrapped. The Bamburger between Salt Lake City and Ogden failed with its high-speed Brill Cars in 1952. Today, arguably only two of the vast empire of interurban systems survived: The Philadelphia and Western Suburban Railway – aka the Red Arrow Lines (now a part of SEPTA) and the Chicago South Shore and South Bend Railway (now state owned). And highways had everything to do with this extinction.

The United States government, state agencies, and local communities allowed these systems to fail. In the District of Columbia, Congress ordered the elimination of streetcars over the strong objections of the local owners and managers. The government was doing its part.

So let’s not forget the words of Charlie Wilson when asked if there were a conflict with his former employer (GM) on his possible appointment to Secretary of Defense in 1953. He replied, “I cannot conceive of one because for years, I thought what was good for our country was good for General Motors, and vice versa.”

In other words, whatever the benefits of cars over trains or streetcars, we’ll never know which could have won that fight, because the only way one or the other of these highly capital-intensive systems could become dominant was through public policy and state investment – and the palm-greasing and arm-twisting known as lobbying that make those things happen. Watkins, in other words, wasn’t wrong about the technology: he just incorrectly predicted who would be better able to graft the system. Streetcars, subways, and intercity trains held the lead for the last quarter of the 19th century and the first quarter of the 20th; then the power shifted to a new set of titans. Go figure. This should be a lesson to all the people who look today at Apple or Google for predictions of what the next 100 years will look like. We should probably look downmarket for the people who will be willing to get their hands dirty with all the sick lobbying they’ll need to do to win the future.

Second, Watkins predicts that college education would be “free to every man and woman.” Here he was an excellent predictor of the post-industrial needs (and capabilities) of US society. In fact, every man and woman probably should have higher education – though what that means has certainly shifted over the past century. We’ve come a long way from the levels of education at that point, and until recently, many European countries were close to providing this. As Doug Henwood points out, it wouldn’t be much of a stretch to do this:

It would not be hard at all to make higher education completely free in the USA. It accounts for not quite 2% of GDP. The personal share, about 1% of GDP, is a third of the income of the richest 10,000 households in the U.S., or three months of Pentagon spending. It’s less than four months of what we waste on administrative costs by not having a single-payer health care finance system. But introduce such a proposal into an election campaign and you would be regarded as suicidally insane.

This lack of political will has not only kept college from being free, but has made it more difficult even to pay off the loans necessary to get through college at current rates (as he points out elsewhere, the increase in sticker-price tuition is almost directly proportional to the decrease in state funding of higher ed). The result is that the US population is losing its lead among OECD countries (even in high school diploma attainment), and the current generation of US citizens will be the first to have less (formal) education than their parents.

Like the prediction about cars (and likely most of Watkins’ predictions), his prediction about higher education is not hobbled by technology (at least not only) or demand (at least not directly). Much of this is, instead, the result of particular policies and funding priorities. Still, it is impressive that, at the height of Robber Baron supremacy, Watkins could imagine a society so publicly minded. Perhaps we should take a cue from his positive outlook – and try to make it more likely (politically and socially) in the next century.

Meat Futures

So far, this blog is off to a great start, with some interesting themes emerging. Bryan has introduced a variety of themes and methodologies. I have to do some reading on the Delphi method and the processes of extrapolation, scenario modeling, environmental scanning, etc. And these Bruce Sterling videos look well worth the hour or so it would take to watch them, provided I ever have another hour to sit and watch something.

Meanwhile, I am trying to shoehorn (or suss out) my interest in inserting more social distortion into the mix. In this, I like John Robb’s tack of talking about what these potential futures mean for individuals and small groups (mostly, it seems, he thinks the best idea is to create stronger local communities to help sustain the change). I also like that he seems keen to point out the more chaotic potential of what Ortega y Gasset called The Revolt of the Masses. In a response to one of Robb’s posts, which I posted on my other blog, I discussed the distinction between “meat” and “tech” in relation to many futures extrapolations I’ve heard. I’d like to explore this as a distinction, maybe even a tag for future posts.

In the case in question, the topic (which both Robb and I have continued discussing, though not with one another) was the domestic use of drones by US law enforcement. Robb predicted that there would be widespread use of drones by large cities, such that something like Occupy Wall Street would be relatively impossible. He made this prediction based on current trends, extrapolating like the best futurists from intelligently sourced research. While my point was likely more long-winded than it needed to be, my basic argument was that this extrapolation was largely made from the perspective of the powerful, and that the technology was allowed to lead the trend. New drone technology would create an imbalance between the sides such that the people on the ground – citizen protesters – would be more subject to the structure of power. This scenario may very well come to pass, but part of Robb’s point was to alert people to the possibility in order to help them avoid it. The social emphasis, I argued, should be more pronounced in these scenarios.

To make this distinction, I borrowed William Gibson’s description of the real, corporeal bodies of otherwise cyber-involved hackers as “meat.” For Gibson’s protagonist, this meat is a sign of weakness – he begins the story with a permanently damaged nervous system that makes it impossible for him to jack into cyberspace. In my mind, the meat of futures is not just people’s bodies – though there is that as well – but the strength of people’s bodies and minds when they collectively join together, whether in planned and coordinated ways or in truly chaotic response to stimuli. In many ways, this meat should be the key variable for all planners and futurists, but most of the time, people appear as pliable and predictable as the little avatars in a Sims game. I understand the need to control for variables, but this seems to be the most important variable of all.

I say this, in part, because the meat variable seems to be newly revived. Whether it is truly the awakening of the sleeping giant of the left, as Adbusters recently termed it, or just the latest iteration of what Karl Polanyi called the double movement, it seems clear that citizens around the world are quickly recognizing, to quote the Old Man, that “they have nothing to lose but their chains.” Polanyi, though, was much more state-oriented in his view of what even a collapse would look like. For society to demand protection from something like the state, people have to have faith in the legitimacy of that state.

What we are seeing now is the build-up to a legitimacy crisis (if it hasn’t already occurred) around political and economic institutions of every kind – elections, legislatures, courts, police, universities, banks, media. This will lead to outbursts, as well as an everyday generalization of an acceptable form of what E.P. Thompson and his historian colleagues once termed “social crime”: namely, things that are technically illegal, but which the local community deems moral and ethical based on the material circumstances.

This idea is already in the air.  Last week, the Deterritorial Support Group released a report, in the fashion of so many think tanks and futures groups, called “Ten Growth Markets for Crisis,” almost all of which involve violating a law or social norm of some kind.  #9, for instance, is what they call Autoreductionism, or “#proletarianshopping,” which they describe as “the collective determination of commodity prices enforced by community solidarity.”  An example of this would be:

Large groups of people arriving at their commuter station in the morning, forcing themselves through the turnstile; an organised troop of families fitting out their kids with winter coats, then demanding to see the manager for negotiations; a national campaign of underpayment on gas and electricity bills; in all instances normal people saying ‘we will pay what we can afford for the essentials of our life’– paying the ‘political price’ for goods and services. Why shouldn’t collective demand become a significant factor in establishing value?; if supermarkets can fix a cartel of suppliers, why can’t working class people fix a cartel of consumers? Why go shopping when you can go proletarian shopping?

In some cases, this is already happening.  In Greece, for instance, activists are now illegally reconnecting people to water and electricity after they have been disconnected for non-payment.  They do this because they now see the law (and the state) itself as illegitimate.  This is nothing new in the Planet of Slums, whether those slums are nominally in the North or the South, but it will become more prevalent in previously law-abiding, middle-class areas of the global north.  As austerity tightens the screws, the violence of the state will be more prevalently on display as the only way to retain the law’s tenuous hold.  As Malcolm Harris, one of the more radical thought leaders of #OWS, recently put it, “the smart money is on entropy.”  Shortly after posting his blog on the topic, he passed along this story about the FDNY stepping in to defend the NYPD from rioters who were attacking the police.  (It’s worth noting that the police were intervening to help a teen who was being threatened by the mob that then turned on them.)

Over the weekend, I was listening to Doug Henwood’s interview with Frances Fox Piven, and two things struck me about their discussion.  First, Piven, a longtime community organizer and scholar of social movements, found it difficult to explain the rise of the Occupy movement.  Piven was singled out a few months ago by Glenn Beck as one of the movement’s major thought leaders, but both she and Henwood found this preposterous.  In her opinion, its emergence was rather mysterious.  Pointing to agitators was easy: there are always agitators, she said.  There have been organizations and leaders and leaderless movements for decades.  Their development and change is important to understand, but it remains difficult to predict when there will be an outburst of this kind – one where the idea spreads with such speed and breadth.  Second, she was very impressed and hopeful about the prospects of the movement, something I have noticed in other conversations around Occupy as well.  It has significantly shifted many of the discussions in the country, though it seems to be having little effect on legislative and executive initiatives at the federal level.  In other words, this is something, and it will likely not go away, if only as an organizing meme.

Piven warned against defeatism (or, from the perspective of elites, becoming too comfortable), reminding listeners that the Civil Rights movement also moved in stops and starts, flaring up from time to time until its pressure became too much to contain.  Whether or not the comparison is accurate, it is important to remember that the Civil Rights movement (along with the Anti-War movement) was not an isolated, national event.  Like the seeds of the Occupy movement, it was international in its inspiration.  During the Arab Spring, the Wisconsin statehouse was occupied and solidarity flowed both ways.

Robb, in his post on the Citibank memo on the Eurocrisis, indirectly notes its silence about the increased social instability the crisis will cause.  This is hardly surprising from the company that brought us the Plutonomy Files, but it is just as important to remember that there is no necessary relation between this instability and any particular ideology.  Here I agree with Polanyi: the energies such instability creates open the door for various forms of fascism to step into the fray – from the more benevolent dictatorship of FDR to the nasty German iteration (both of which, contrary to Robb’s dismissal of electoral politics, were originally secured by the ballot box).

It is also making me rethink my opposition to one of the more economically determinist arguments of the late (and Late) Andre Gunder Frank.  In his book ReOrient, among other things, he discusses the political impact of global recessions.  I’m paraphrasing, but the general drift of his argument is that the ideas behind the American and French Revolutions were not all that important to creating the movements for change in those countries.  Instead, these revolutions were largely an unplanned, uncoordinated response to worsening global economic circumstances.  When I originally read this I found it laughable – humanist that I am, I remain enticed by the Enlightenment ideas behind those movements.  But when you look around the world at the burgeoning storms of social distortion, there is something to be said for the argument that, whatever the ideas rattling around in the grey matter at the top, meat will eventually respond to economic and political hardship, often in unpredictable ways.

As we develop our futures planning initiative, I hope to develop this dimension of our research further (which is not to say that my partner in [social] crime isn’t already with me on this).