The future of intellectuals

Cracked.com is the quintessential internet trivia time suck. They produce witty, often intelligent annotated lists of historical and pop-culture references, spread out over several pages to increase the click-throughs and ad revenue. It’s fun, but you know whose cud you’re chewing.

But they usually curate some interesting digital objects. Back in June they released the list of “The 5 Most Ridiculous Pop Culture Predictions That Came True.” Social media being what it is, I just saw it today in one or another of the feeds I was monitoring. Two of them are incidental (a 1987 sitcom predicted the death of Gaddafi almost to the day; and the chilling cover and title of OJ’s 2007 book, If I Did It, were almost exactly prefigured in a 1999 segment of Chris Rock’s comedy show). And back in January of 2012 we talked about their #1 choice (the predictions made by Ladies’ Home Journal in the early 1900s about the century to come).

The other two are rather striking. #2 is by former NITLE fellow Alan Kay and involves his accurate 1982 predictions (and drawings) of the world of the future – which includes iPads, WiFi, computer terminals on the backs of airplane seats, digital museum guides for children, and the ability to take a course online while you sit in a bar. The latter two are still in development, but most of the others have pretty much come to pass. Kay is a smart guy, and he would say that the fact that his predictions came true has less to do with his position as a sage than with his having actively helped to make that future. What the predictions really show is our own lack of genuinely imaginative visions of the future, particularly visions that involve social rather than technological innovation.

This is nowhere more evident than in the #5 choice Cracked provides. It centers on a prognostic promotional video created by Apple in 1987. The video imagines something called the “Knowledge Navigator,” which isn’t all that different from Vannevar Bush’s Memex or the librarian in Stephenson’s Snow Crash. Cracked sees the video as accurately predicting the 2011 release of the Siri-enhanced iPhone 4S, which “was released about two-and-a-half weeks after this video was to take place.” It is an impressive video precisely because it is so different from the hip, quick-cut, music-filled advertisements for Apple products. Instead it shows a university professor – a scientist who studies deforestation – engaged in an extensive conversation with a personal digital assistant embedded in a tablet computer on his desk.

The assistant keeps his schedule, reminds him of things he needs to pick up, receives his calls and then helps him create, over his lunch break, an advanced research project and presentation with a colleague.  The assistant helps him (and eventually his colleague, who joins them via video chat) look up scholarly journal articles, summarize their content, and even clip and manipulate data for a presentation: all using conversational verbal requests. The colleague eventually agrees to virtually stop by to supplement a lecture he was scheduled to give but has failed to prepare. It is pretty slick and in some ways exactly replicates the environment in which I do a lot of my work – though in my case Google plays a bigger role than Apple. Hangouts, search, and sharing documents are all a bigger deal than the fact that I can do it on my tablet instead of my laptop. 

As in the Kay example, Apple hardly took a back seat while this future unfolded (though Siri was actually developed by the Stanford Research Institute and then acquired by Apple in 2010). Still, this is clearly a moment where, as Raymond Williams might have put it, we can see the intention behind technological development, the social function it was meant to serve. While there are parallels between the imagined product and the real one, what is probably most interesting is the aspect that remains impossible: the fantasy that computers will soon be capable of scouring vast reserves of information and producing knowledge based on that information.

Siri is pretty good at making appointments and sending emails (some of the functions of the Knowledge Navigator). Given this, Cracked.com is somewhat justified in finding this prediction true. However, when it comes to the kind of advanced searching, summarizing, and synthesis demonstrated by the Apple product – the stuff that is actually interesting and visionary – we are still a long way off.

As David Weinberger outlines in his 2012 book Too Big to Know (and as many others have described before him), the Data-Information-Knowledge-Wisdom hierarchy describes an increasingly refined process whereby messy pools of data are transformed into knowledge and then wisdom (presumably through a better and better computer program). While the video doesn’t give us any insight into the back-end function of the Knowledge Navigator, it appears to work like every other utopian knowledge assistant – namely as a seamless and comprehensively informed artificial intelligence able to scour journal articles and immediately provide not only a summary of the knowledge they contain but also links to other works. As Weinberger complained in his book (and this excerpt):

But knowledge is not a result merely of filtering or algorithms. It results from a far more complex process that is social, goal-driven, contextual, and culturally-bound. We get to knowledge — especially “actionable” knowledge — by having desires and curiosity, through plotting and play, by being wrong more often than right, by talking with others and forming social bonds, by applying methods and then backing away from them, by calculation and serendipity, by rationality and intuition, by institutional processes and social roles. Most important in this regard, where the decisions are tough and knowledge is hard to come by, knowledge is not determined by information, for it is the knowing process that first decides which information is relevant, and how it is to be used.


The real problem with the DIKW pyramid is that it’s a pyramid. The image that knowledge (much less wisdom) results from applying finer-grained filters at each level, paints the wrong picture. That view is natural to the Information Age which has been all about filtering noise, reducing the flow to what is clean, clear and manageable. Knowledge is more creative, messier, harder won, and far more discontinuous.

In this regard, what is perhaps most significant about the Knowledge Navigator, and the tech-fueled dreams it represents, is the role of the intellectual in this environment. We don’t know what the good professor might have been doing before this moment, but it is rather telling that he has evidently only passingly kept up with his field of expertise. He’s due to give a lecture this afternoon, but only dimly recalls an article that came out months ago that he meant to read but hasn’t yet. He is the pinnacle of the pyramid, waiting till the last minute to sort the mechanized knowledge produced by his machine (and his female colleague) and to instantly generate wisdom so compelling and obvious it can easily be demonstrated with visual animations.

One can imagine a framework in which the Knowledge Navigator would work as this vision hopes, but it would rely on a vast network of other researchers who make their studies available in openly searchable repositories – repositories that also help curate the most important research from the past few months. Summaries might be written into the abstracts (a process that is highly uneven among humans, though perhaps they would get better at it if they thought they were writing for a machine), and it might be easy to quickly surf towards the clearly truthful statements, as opposed to the usually messy and partial conclusions of the average scientific paper (much less a paper in the social sciences, arts, or humanities). Bracketing the fact that knowledge changes, that most questions are not so easily answered at this level of inquiry, and that most conclusions are not so easily communicated, it seems a worthwhile machine to have. Keep in mind, this is beyond the kind of Watson machine that is able to answer trivia questions on TV. This is a machine that can instantly read and summarize entire academic articles. Or so it appears.

But the most likely way for something like this to work would be through harnessing the collective wisdom of the intellectual crowd rather than expecting you can program a computer to think at the level of an advanced professor in the sciences. This, however, would presume there is still an intellectual crowd to help produce this knowledge-cum-wisdom. There is no hint of this being one of the mechanisms that would make the navigator function, but as scholars are increasingly realizing, open platforms make for faster feedback loops and a social (intellectual) filtering that supersedes anything an individual AI can produce, at least at the current moment. Go to Google Scholar and search for “deforestation.” Even now, in the fifth month of 2013, there are already over 5,000 citations for the term, and the search engine has no way of filtering them by discipline, other than equally baggy and imprecise methods like combining deforestation with other keywords. On the other hand, one can easily see how a social media network like Academia.edu, if it really caught on and academics used it, could become a pretty decent sorting mechanism. And, of course, there is the time-tested insularity of knowledge production, fully evident in the video: keeping up with your close colleagues’ work and becoming colleagues with the people whose work is respected (otherwise known as good old-fashioned networking). Brewster Kahle at the Internet Archive might have set out to create a massive collection of data for the purposes of educating AI machines at some distant point in the future, but in the meantime it is likely far more useful to have a well organized archive that can be used by actual humans.
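
To see why keyword combinations are such a baggy proxy for disciplinary filtering, here is a toy sketch. The article records, fields, and keywords below are all invented for the sake of the example; Google Scholar exposes no discipline facet (or public API), which is exactly the problem.

```python
# Hypothetical article records. A real scholarly index rarely carries a clean
# "discipline" field, which is what the searcher actually wants to filter on.
articles = [
    {"title": "Deforestation and carbon flux in the Amazon basin",
     "abstract": "Remote sensing estimates of forest loss and carbon release.",
     "discipline": "ecology"},
    {"title": "Deforestation discourse in Brazilian land policy",
     "abstract": "A critical analysis of how forest loss is framed in policy debates.",
     "discipline": "political science"},
    {"title": "Machine learning for deforestation alerts",
     "abstract": "We train classifiers on satellite imagery to flag new clearing.",
     "discipline": "computer science"},
]

def keyword_filter(records, *terms):
    """Keep records whose title or abstract contains every search term."""
    def hit(rec):
        text = (rec["title"] + " " + rec["abstract"]).lower()
        return all(t.lower() in text for t in terms)
    return [r for r in records if hit(r)]

# "deforestation" AND "policy" is meant to isolate social-science work, but it
# misses papers that say "land use" instead of "policy" and would catch any
# ecology paper that happens to mention policy in passing.
print([r["title"] for r in keyword_filter(articles, "deforestation", "policy")])

# Filtering on an explicit discipline field is the precise version of the same
# request; no such facet exists in the search engine itself.
print([r["title"] for r in articles if r["discipline"] == "political science"])
```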

If we think of it in social and cultural terms, then we might actually be very close to having a Knowledge Navigator. Using technology to link large-scale projects, like those being harnessed together under CLIR’s Committee on Coherence at Scale, will amplify this social networking power even further. Legally, even this level of technical integration is problematic. The current academic knowledge infrastructure consists of elaborately walled gardens which hamper the searches of many academics (particularly those at smaller institutions). Getting an article – to summarize or otherwise – often requires several steps of annoying human intervention (and usually a fee!). And economically, we are at a moment where MOOCs and virtual charter schools threaten to be the latest nail in the coffin of the US system of education – a system in which education is treated as so mechanical it can be assessed via standardized tests and the number of professors necessary to deliver content can be significantly reduced.

And here, it seems, the desire for a Knowledge Navigator intersects with the broader crisis we are facing in higher education at large. The question of MOOCs is ultimately about how many intellectuals we need to produce higher education and specialized disciplinary research. If we only need a handful to produce the courses that will teach the masses, perhaps we can finally let all those adjuncts go: they can work full time at the coffee shop where they moonlight, and in their spare time they can contract with the Georgia Institute of Technology to be one of the few humans who grade the work of the paying students in the MOOC; or they can contribute to the knowledge base on which the Navigator will work. A handful of actually paid professors, sitting in their elite, woodgrain-finished offices, will pause from their daily activities (picking up family from the airport and a cake for the birthday party tonight) to briefly consider the knowledge produced for them by the unpaid underclass and highly developed algorithmic machines, then deliver their wisdom via MOOC before they rush off to fulfill their social commitments.

Who this vision of the future enhances and who it exploits will likely be one of the key struggles of the coming decade. It is hard to parse what people like Bill Gates think will come of the MOOC Moment, but clearly he would prefer we think of education as something that happens on the Surface of one of Microsoft’s tablets, rather than in the interactions that take place in the physical classroom. At the very least, he would prefer completely disorganized, entrepreneurial teachers whose existence is, at every moment, contingent on meeting the standards set out by one or another of the for-profit companies administering our testing regime – first at the K-12 level, and now, with Pearson administering the multiple-choice tests for the Georgia Tech MOOCs, in higher ed as well.

That may indeed be the future, but, as Gerry Canavan aptly put it back in February, “MOOCs are how you’d structure higher education if you believed there were no future.” By this, he means that laying off the majority of your faculty (or, as in the case of Georgia Tech, hiring only 8 instructors to oversee 20,000 students) may be economically sustainable (if the only thing that matters is a commodity degree of questionable quality spit out at the end of the program), but it is likely not institutionally sustainable. Today we have well-trained computer science professors who can develop the program based on today’s knowledge; and certainly there are deep pools of exploitable, underemployed adjuncts you can draw upon – adjuncts who took on debt and improved themselves through higher education because, at that point, it still held out some promise of a career replacing those computer science professors at the top, both as teachers and researchers. But who will go to graduate school in the next decade if their only hope is to become one of the handful of instructors necessary to run a MOOC?

Failing to account for, and pay for, the continuation and reproduction of a necessary system isn’t economic rationality; it isn’t a hard-nosed commitment to making the tough choices; it’s the exact opposite. It’s living as if there is no future, no need to reproduce the systems we have now for the future generations who will eventually need them. The fantasy that we could MOOCify education this year to save money on professor labor next year, and gain a few black lines in the budget, ignores the obvious need for a higher educational system that will be able to update, replenish, and sustain the glorious MOOCiversity when that time inevitably comes. Who is supposed to develop all the new and updated MOOCs we’ll need in two, five, ten, twenty years, in response to events and discoveries and technologies we cannot yet imagine? Who is going to moderate the discussion forums, grade the tests, answer questions from the students? In what capacity and under what contract terms will these MOOC-updaters and MOOC-runners be employed? By whom? Where will they have received their training, and how will that training have been paid for? What is the business model for the MOOC — not this quarter, but this decade, this century?

These are the questions the smooth surface of the Knowledge Navigator cannot answer. They are questions to which we used to have answers, based on processes of production and reproduction that have worked fairly well for hundreds of years. There has undoubtedly been an overproduction of intellectuals on some level – if we look strictly at the employment numbers. But it seems a very cynical view of the future to think we won’t need highly trained people to do work we don’t yet understand. Judging from the BLS numbers, though, that is what employers appear to be predicting. This seems to point to the need for a more public employment of intellectual labor – akin to the public expansion of higher education in the mid-twentieth century. It doesn’t take a very wide environmental scan to realize that kind of project is not in the offing anytime soon.

Right now it appears we are once again putting our faith in gadgetry – in part because the sale of gadgetry is far more profitable than the minimal margins on service work like actually educating people. There are few Alan Kays in the world who can imagine the beneficial uses for technology when it is embedded in a robust social context that values education and learning. And since all of us must now ensure that our contributions to the social totality are measurable in dollars, it is likely that those who already possess that capital will be ahead of the curve.


On the future, MOOCs, tenure, etc.

Last week I spoke on MOOCs in an online seminar with faculty and staff from about 40 different schools. The consensus among that group seemed to be that developing in-house online programs would be to their benefit as institutions. In other words, many of them are looking to create some form of digital teaching program in order to have a version of that product on hand if demand from students (or parents) increases. Many of them are also excited by the pedagogical possibilities of using digital platforms differently.

This Rutgers University statement on audio/visual recording is very interesting. In part, it is admirable in that it finds compelling legal and pedagogical reasons to recommend against allowing widespread recording (and sharing via social media) of course materials or classroom activities. Namely, it focuses on the copyright issues that might arise if, for instance, YouTube becomes a popular place for students (or faculty) to post lecture videos or even student discussions. On the one hand, the classroom is seen as a protected space in the copyright code. As I point out below, the recent GSU lawsuit provides a case in point. On the other hand, taping conversations that go on in a classroom, and making them public, has significant pedagogical consequences. Even in the best of scenarios – where students welcome being taped, raising no privacy concerns – we risk having them act as if every meeting were a small episode of Big Brother, where the watching only makes them perform more fully for the camera, rather than engaging in the risky, personal reflection that leads to real learning. The point made by the Rutgers faculty is crucial – public conversations are not always as productive, especially for students who are just trying to figure out what they believe and what they want to learn. For them, the privacy of the classroom and instructors’ responsibility in the conversation are essential elements of successful, critical pedagogy. Rethinking copyright and privacy seems essential for how we move forward in the open, online environment. But so is understanding how we value our own teaching and research – and how we expect others to value it.

In this, it is hard not to read the statement in the context of the broader conversation about “disruptive innovation” and venture capital’s version of rethinking the classroom for a digital age. This discourse is seductive because it is based on a certain promise: MOOCs and even lecture capture could be very useful for generating conversations beyond our individual classrooms, perhaps drawing our students into that broader conversation through the dynamic forums Stanford, Harvard, and MIT have created in their MOOC platforms. So it is possible that educators could reach and interact with far more students than they do now – to the benefit of both themselves and their students. This, in turn, sounds very good to the U.S. Department of Education, underfunded and under fire as it is.

To them, the learning possibilities are less important than the economics of scale. MOOCs seem important to business-minded administrators (and their newspaper editorial gurus, e.g. Thomas Friedman) because they seem to solve the entire cost problem of education in one swoop: education is made technologically glamorous, infinitely productive, highly in demand, and incredibly cheap. It has never been a better time to be an educational entrepreneur because everything is on the table. Every other industry has had active, intelligent laborers replaced with machines that could standardize their human actions, turning our jobs over to robots (or at least threatening to): we can mechanize factories to lower the labor costs of cars, so why is it taking so long to do away with these pesky professors? As Andrew Delbanco recently quoted Richard Vedder, “With the possible exception of prostitution . . . teaching is the only profession that has had no productivity advance in the 2,400 years since Socrates.”

For them, the promise of MOOCs upsets much of the present infrastructure of education, especially the tenure system itself, which is the precarious but fundamental subsidy of the entire present system. Lost in the conversations about both academic publishing and the arrival of MOOCs is the way the present infrastructure was made possible by these broader subsidies. It is true that an ever-smaller number of faculty are put on the tenure track, but every year for the better part of the last two decades we have turned out a fresh new crop of people hoping to secure a place in that realm, working relentlessly for free with the brass ring of tenured employment dangling ever more remotely in front of them. Those that don’t succeed at first continue teaching as adjuncts, exploiting themselves for the larger goal of educating the next generation of Americans.

Even leaving aside the idea that shared governance might actually be more efficient (and therefore tenure a better political-economic model on which to base the university), the security and responsibility of tenure is a priceless motivation of the higher education system. Removing it will create untold havoc in the intellectual economy that serves as the foundation of MOOCs. Many people have pointed out that MIT, Harvard, and other elite schools are basically using their own well-established brands and resources, currently with little hope of turning a profit on these activities. Already there are signs of people deciding not to go back to school, with many graduate programs reporting lower enrollments and law schools in unprecedented decline. How will MOOCs (or education in general) function without tenure as a subsidy? How will academic publishers continue to produce knowledge? The boycotts of Elsevier and others are small potatoes compared to the decimation of the free academic labor pool that truly unleashing market forces on higher education would cause. Gerry Canavan had a different way of phrasing this, which got some attention back in February:

http://gerrycanavan.wordpress.com/2013/02/18/some-preliminary-theses-on-moocs/

The whole post is worth reading, but this part stuck out to me especially:

Failing to account for, and pay for, the continuation and reproduction of a necessary system isn’t economic rationality; it isn’t a hard-nosed commitment to making the tough choices; it’s the exact opposite. It’s living as if there is no future, no need to reproduce the systems we have now for the future generations who will eventually need them. The fantasy that we could MOOCify education this year to save money on professor labor next year, and gain a few black lines in the budget, ignores the obvious need for a higher educational system that will be able to update, replenish, and sustain the glorious MOOCiversity when that time inevitably comes. Who is supposed to develop all the new and updated MOOCs we’ll need in two, five, ten, twenty years, in response to events and discoveries and technologies we cannot yet imagine? Who is going to moderate the discussion forums, grade the tests, answer questions from the students? In what capacity and under what contract terms will these MOOC-updaters and MOOC-runners be employed? By whom? Where will they have received their training, and how will that training have been paid for? What is the business model for the MOOC — not this quarter, but this decade, this century?

Related to MOOCs is the struggle over scholarly communication more generally. Academic fair use and the ability of libraries to create digital archives are under direct attack by the academic publishing industry. But it is not that industry alone. Take the recent lawsuit brought by Sage, Oxford University Press, and other academic publishers – publishers who make their money off the very labor we all do, virtually for free, but really subsidized by the tenure system. Because a condition of our job is that we publish, we do so with little expectation of direct economic gain from these activities. These publishers sued librarians and faculty – as individuals – for the policies they had in place around online course reserves. The library provided some digital course reserves, and faculty also provided some digital copies of articles and book chapters through learning management systems (LMS) – like the Moodle platform many of us use to provide course materials. Since Georgia State University – as a public institution – has sovereign immunity in these cases, the publishers instead sued the individual administrators within the system, holding them liable for all of the activities around sharing.

The publishers accused the school of making greater use of their materials than copyright law allows. But the case is really about the future of academic publishers in a world where we can share digital materials far more seamlessly than ever before. And if we can be sued for sharing academic materials, what is to stop us from getting sued if a student videotapes us showing a movie clip and it becomes popular enough to attract attention? The judge in the case found that only 5% of the instances the publishers cited would fall outside even a conservative definition of fair use, and thus that faculty and librarians are largely on the right side of the law. Yet the publishers have said they are going to appeal the case, not to protect their own economic interests, but on the advice of the authors of the texts in question, who would like your students to pay them residuals every time you have them use that article (thank you very much). In short, those of us in academia and academic publishing are living in a den of vipers, in which each side is willing to strike the other down for the few remaining morsels of public higher education funding.

If we can’t put course reserves on library websites, how will students not have to pay for them on MOOCs? And what does that mean in terms of their being “Open”? In some cases, getting students to pay for the text is the primary goal of the MOOC: consider the economics professor at U.C. Irvine who was removed as instructor from his own MOOC, in part because he insisted that, to participate, the 40,000 or so students would need to buy his $90 economics textbook. The flipside of this is what the AAUP has often feared will happen: once our lectures can be recorded, once our MOOCs are created, what is to stop universities from repackaging them, declaring them “works for hire,” and leaving faculty no claim to copyright in their teaching materials?

MOOCs might present some interesting possibilities for the students who will soon be graduating into our universities and colleges – and for those who, while qualified, will not be able to afford a formal education because support for the public system is itself under fire. Since so much of their conversation with their peers will be mediated by some form of audio-visual artifact, it is intriguing to think about what the circulation of our classroom discussions might mean. It is difficult to chart a path through this landscape that doesn’t ultimately lead to a cliff on the other side. Luckily my colleagues – like Bryan Alexander – are on the case.

I’m not sure if the Rutgers Senate statement strikes the right balance, but at least it errs on the side of giving students and faculty control over how recordings will be used. Now they just need a statement of imagination outlining all the possible ways it could help enhance learning if recording were allowed.

App.net and the future of the public social network

I’ve just watched the launch video for the product/project App.net, which is another social media network. This one has the attractive distinction of charging a fee for use. The argument is that, by charging a fee, they will be answerable to their users alone rather than treating users as one step removed in the process. The key claim (similar to that of Diaspora*) is that “We will never sell your personal data, content, feed, interests, clicks, or anything else to advertisers. We promise.” This aligns all the interests in developing the platform with designing for users rather than advertisers or marketers.

Aside from the rather uninspiring pitch given by their founder, I fully agree with their argument about the problems of commercially supported culture in general. Despite the apparent belief that Facebook’s data-driven, “the product is you” approach to social media is the first occurrence of this model, the argument about the audience commodity was made in the early 70s by Dallas Smythe about commercial radio and TV. His article (which I can’t link to directly because it is now the commodity of some publisher and therefore unavailable online) was pitched at critiquing those in the “Western Marxist” tradition – mostly of Cultural Studies but also Critical Theory – who focused too closely on the ideological role the media played in securing legitimation for the status quo. For him, a more useful argument could be found by looking at the commercial media as an industrial force in and of itself.

From this perspective, the question pertinent to advertising-supported commercial television and radio was: what are they producing? The obvious first answer is: they are producing media. Yet when you look at how they made their money, the production of media actually appears as more of an expense. A former chief economist of the FCC (from 1943 to 1948), Smythe knew that when NBC or CBS “sold” their product, their primary consumer was the advertising industry, who were ultimately paying the bills. Instead of producing news or entertainment for the viewer or consumer of news directly, they were producing media to attract audiences, which could then be sold to advertisers. Of course people in the industry had already known this for many years. But sometimes the social science community needs to be reminded that their objects of inquiry exist in both the abstract and the concrete. In this case, instead of recognizing the true production process, Smythe declared that ideological critiques of the culture industry focus only on “the free lunch,” which seems an obvious nod to the predominant neoliberal thinker of his age, Milton Friedman.

In addition, in a perverse form of exploitation, even though the advertiser-supported commercial media corporation may be paying people to produce this free lunch, it was not actually producing the commodity that was sold to advertisers. Instead, it was the viewers themselves who did the work of watching the advertisements, trading their leisure time for the commodity culture in between. Smythe found this problematic since, among other things, it meant an effective increase in the working day, well into the time we were supposed to be resting. Only instead of getting paid, we were merely generating profits for the owners of commercial media in exchange for their production of our common popular culture.

Most inquiries into Smythe seem to end here – especially those which try to apply Smythe to the current environment of social media, to which he is obviously very relevant. But this misses Smythe’s synthesis, which points out that, while the primary role of the media itself is not ideological, it serves that purpose nonetheless. We must be prepared for this form of leisure-consumer-labor through intensive socialization into commodity culture more generally. In a later essay in his 1981 collection, Dependency Road, Smythe provides concrete examples of the free labor we perform in our preparation for this dimension of the capitalist economy. In going to the store with our parents, for instance, we learn how to think about (and to think about thinking about) brands and the other practices of commodity consumption. This creates a baseline of self-education that advertisers, marketers, and the advertiser-supported commercial media itself can all expect consumers on the other end to possess. And this becomes an ideological support in the full meaning of Althusser’s ideology – it is a practical, material effect which helps reproduce the relations of production.

Smythe doesn’t really supply a solution to this problem, but if the problem is commodification, then one solution is to create a public institution of some sort in order to obviate the commodification of that culture: a socially controlled media that serves society because society is both its controller and its primary consumer. In other words, it recognizes the political, cultural, and economic dimensions of the role of the media and attempts to address them as an overdetermined totality of contradictions. As jargony as that might sound, it is actually a worthwhile strategy that tries to take into account the multiple variables, power relations, technological possibilities, and the historical development of these within the economic landscape of the social formation. It therefore recognizes that flipping one variable will never be enough – in this case, shifting the costs from the advertiser to the direct consumer is a silly way to run a social network – or, more accurately, it is an utterly conventional way to run one.

Although App.net provides a solution, it is based on a libertarian interpretation of the problem. This, predictably, creates a solution which overlooks the fundamentally social nature of communication. (In a more business sense it is reinstating the model of internet communication on which AOL was based, hence its conventionality.) More accurately, it elides the social problem in its totality and casts it as an economic problem alone. They are not alone in their approach – one could argue that peer-reviewed publications (PeerJ comes to mind) have adopted the same answer to their problem: but this piecemeal approach, pragmatic as it may be under the circumstances, is frustratingly disconnected from any overarching vision, limiting its understanding of even the most immediate problem it aims to solve (i.e., preventing social networks from harvesting data in the one case and overturning all of scholarly publishing in the other). What is charming about these endeavors is that they seem to sincerely believe this piecemeal approach will somehow transform the environment at large in a positive direction – or at least respond to an emergent market demand. It’s all Pop (economics) and Popper (science).

In terms of social media, App.net claims to be responding to a demand (or, more accurately, to a significant damper on potentially greater demand):

Many people have become so cynical about user-hostile, privacy-violating social services that they refuse to participate at all. We can understand why. Earning your trust is the most important thing we can do. It won’t be easy, and we will make some mistakes, but we will do our best to be honest and transparent.

Their solution, therefore, is to retain the commodity form of the media but to pass the costs of that media along to the users themselves. So the tension is not between commercial and non-commercial communication, but between advertiser-supported commercial culture and culture supported by user fees.

In purely pragmatic terms, if the network consists only of users who have paid a fee, then that significantly cuts into your ability not only to network, but to distribute information. What this creates is a niche market of social media consumers: “we are the people who have the principles and the financial capacity to demand a social network that is safe and private.” This is fine if we don’t take social to mean public in any way. And in that case, it makes some sense, and might be a useful solution, except that it relies solely on the market for its feedback and therefore for its ability to respond to the problems users see in the platform. Albert O. Hirschman (whom I’ve discussed before on my other blog in relation to education reform) argues that this “taut economy” rationality is a problematic starting point precisely because it allows the most vocal and “quality conscious” consumers to simply select a different product rather than attempting to improve the one that appears to be failing them. App.net claims to have enough of a financial cushion to weather this with some regularity, but it is also hoping to create a monopoly of sorts (in this case a walled garden where you can have a society free from advertising-supported capitalism – and participate in the pure market activity of purchasing a service from a provider); on this point they might be onto something, as that is exactly the condition (along with loyalty, obviously another social media favorite) in which Hirschman claims the recuperative politics of voice are energized.

So we can divide this situation into three dimensions: the cultural, the economic, and the political. I will tackle each of them in turn, but first we should recognize the ecosystem of social media as it currently exists. On the one hand, it is arguably directly related to the ecosystem and economy of the advertiser-driven commercial media of the past (and present). For instance, despite the fact that Clay Shirky seems completely oblivious to Smythe, the amazingly productive “cognitive surplus” he discovers once people stopped watching TV is clearly related to the fact that, before they were working for Wikipedia, they were working for the National Broadcasting Company (of course, now that NBC is owned by an internet service provider, Comcast, the distinction is increasingly difficult to draw). In any case, the implication is that people are creating content more directly for themselves – but increasingly they do this on web 2.0 platforms set up to capitalize on this social energy.

For one, as my colleague Rob Gehl has pointed out, the web 2.0 articulation of the von Neumann architecture clearly creates a division between the users as the “affective processors” of information and the social media platform itself as the owner of the archive those users valorize through their labor. These platforms ultimately make money (or, in most cases, hope to make a lot of money) on the data collected and information processed by people on them. More importantly, they hope to develop profiles of the kinds of engagements people take part in, the kinds of connections they make, and the way information travels throughout social networks. In other words, they are able to engage in a highly sophisticated version of administrative communication research so that they can better target advertising messages. This is especially true of Facebook, though it remains to be seen whether this will actually make them the money they believe they can make. In other cases – such as YouTube – the traditional advertiser support remains, but there is a broader range of producers remunerated for the media created. In any case, it is clear that the platform owners have a range of interests competing for their attention, and users are ultimately only an indirect consumer, at least according to Smythe. App.net hopes to short-circuit this by making a product only for the users.

But even at one step removed, current “commercial” social media platforms must serve their users. And in many cases, they are far more sensitive to users than even broadcast media. Whereas the only index of consumer behavior NBC has to deal with is the number of viewers (and, perhaps, the problems of DVRs and channel surfing), the very product Facebook or Google hopes to sell – consumer data – requires almost constant indexing of just how much participation, interaction, and attention they are selling. In other words, the platform must be good enough to fulfill a basic social need and to meet the increasing demands and expectations of the public it is serving. Thus, it is particularly sensitive to both the voice and exit functions.

And, on the other hand, because Facebook is increasingly the only game in the social network business, with upwards of 850 million users, it is also particularly sensitive to appearing to be an advertiser free-for-all. If users are turned off or no longer find it a useful part of their lives, they will abandon it with the speed of Friendster, MySpace, and all the previous social media leaders. App.net doesn’t appear to have this problem: according to one watcher, although there are about 20k users, about 50% of the posts were generated by only 250 of the people involved.

This makes the platform less valuable as a venue for “social media” since, well, you can’t really share with all that large a society. This social problem becomes an economic one at some point, as many people predict it will for Facebook itself – there are likely significant plateaus in the number of users you can get. Organizations like PeerJ and App.net, which rely only on user fees, will experience this plateau in users much the way the bottom falls out of a pyramid scheme: once you’ve collected all the fees from everyone, how do you pay for sustained improvements to infrastructure, or for the ability to move onto new communication devices, thereby retaining a valuable platform for social interaction? And, of course, currency inflation will always force them to charge latecomers to the party more.
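
The pyramid-scheme dynamic is easy to see in a toy cash-flow model. Every number below is hypothetical (App.net’s actual fee was annual rather than one-time, and its costs are not public); only the shape of the curve matters.

```python
# Toy model of a fee-only platform: one-time membership fees against
# recurring infrastructure costs. All figures are invented for illustration.
fee = 50.0                 # one-time fee per new member (hypothetical)
monthly_cost = 100_000.0   # fixed infrastructure and development cost (hypothetical)
new_members = 20_000       # sign-ups in the first month (hypothetical)

balance = 0.0
for month in range(1, 25):
    balance += new_members * fee          # revenue arrives only with new sign-ups
    balance -= monthly_cost               # costs continue whether or not anyone joins
    print(f"month {month:2d}: new members {new_members:6d}, balance ${balance:,.0f}")
    new_members = int(new_members * 0.5)  # growth slows as the niche saturates

# Once sign-ups flatten, revenue effectively stops while costs continue: the
# balance peaks and then drains, unless fees recur or latecomers pay more.
```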

The alternative is to begin thinking of Facebook as something other than an advertiser-supported social media platform and more, as danah boyd long ago suggested, as a public utility. She makes a lot of the same arguments I do above about the way monopolies are forced to listen to their users in order to stay in business. The only difference is that this public utility was born privatized, and we may have a steeper climb to regulate it in the interest of the public. This suggests that, instead of joining yet another social media platform, you might be better off using the one you already have to organize people around it – and to create better rules that are more in your interest. If anyone is successful in doing this, make sure to keep good notes: we evidently need a lot of training in this to fix our public infrastructure, public schools, public libraries, public universities, and, of course, our public servants. As Hirschman might have said, democracy is messy business: but it is the only way we’re ever going to change things.

It is also worth noting that, like the administrative research of the past – where people like Gallup and Roper were intimately involved in trying to use the same techniques to produce public opinion polls that would inform government decisions – Facebook’s data gathering could serve much more noble ends. This is highlighted by a recent New York Times article on Target’s nascent ability to tell when a teenage customer was pregnant before her father even discovered it, based solely on her buying patterns (a coup only possible because Target served as one of her primary shopping destinations). The Times highlights the wonderful, insightful social skills the mathematicians were able to apply to making this amazing discovery even as it points to the utterly banal use the corporation made of this information (i.e., it was able to send a targeted coupon flier, meant to cement her buying habits by encouraging her to think of Target as the place you shop for stuff for your baby).
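
To make the mechanics of that kind of prediction concrete, here is a generic toy classifier over invented purchase-history features. This is not Target’s model, just the family of technique the article describes: scoring a label from patterns in what someone buys.

```python
# Toy prediction-from-purchase-patterns: logistic regression over binary
# "bought product X" features. All data below are fabricated for illustration.
from sklearn.linear_model import LogisticRegression

# Feature columns: [unscented lotion, prenatal vitamins, cotton balls, beer]
X = [
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
]
y = [1, 1, 1, 0, 0, 0]  # 1 = shopper later linked to a baby registry (made up)

model = LogisticRegression().fit(X, y)

# A new shopper's basket: lotion and vitamins, no beer.
print(model.predict_proba([[1, 1, 0, 0]])[0][1])  # estimated probability of the label
```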
In a very different context, Doug Henwood recently interviewed the Greek economist Yanis Varoufakis about his new job at the Valve Software video game design firm. Among other things, Varoufakis wonders at the amazing laboratory of human interaction the games’ market interactions provide. He ultimately notes how the process of regulation is implicit in how these interactions occur – and how the data about them are used to shape future interactions. As we refine these kinds of tools, it seems clear that some of what Facebook and Google and Twitter are producing through the aggregation of data about us and our personal interactions could become one of the greatest experiments in social science ever known. Perhaps App.net hopes to do this in a non-evil way, but as Google long ago learned, defining evil is tricky business: today’s totally normal can be tomorrow’s invitation to invoke Godwin’s law.

Bad Faith: Naomi Schaefer Riley and the War on Public Education

Bad Faith: Naomi Schaefer Riley and the War on Public Education.

The above post makes a decent argument about the dustup over Naomi Schaefer Riley’s blog entry on African American Studies programs. The argument is that it fits alongside Santorum’s jabs at higher education – each being part of a war on public higher education that is ramping up in earnest. I don’t know how much worse this coming war could make things for institutions of higher learning (or whether it is better to see these as rearguard buttresses against increased public funding if we ever see an economic recovery). But either way it seems like some kind of data point in the political atmosphere around higher ed.

Wired: short takes on futuring techniques

Yesterday, Wired ran a small feature with short descriptions of how “visionaries” spot the future. Nothing groundbreaking, but I think Tim O’Reilly might have one of the more popular approaches right now: he recommends coolhunting.

I don’t really think I spot the future; I spot the things in the present that tell us something about the future. I look for interesting people. I find the cool kids and then say, what are they doing?

VC attorney Chris Sacca agrees, saying that people on Twitter do his due diligence on products worthy of investment and that watching people shop at Best Buy is a good indicator of the market. Peter Schwartz also agrees with the idea of watching people, only he looks at what the “smart kids” are doing.

Watch where scientific talent is heading. Science advances in part by attracting talented people. So if an area is attracting great talent and money from governments and companies, you can expect to see important change

On the other side, many of these visionaries subscribe to the idea that innovation has more to do with getting some of those smart people yourself and doing something daring with them. MIT Media Lab director Joi Ito says, “Agility is essential,” mouthing the other mantra of post-Fordist flexibility, but in the end this flexibility is best understood as having the capacity to react – meaning you have to have some good people who are ready for anything. As Vint Cerf puts it, “Some things get invented because it is suddenly possible to invent them.” The passive voice elides the fact that this invention happens because there are smart people who are able to spend the time inventing those things – and institutions to support and deliver that invention on a broader scale.

How many of these assumptions, then, are based on a precarious hangover from the disintegration of the Fordist model? It could be argued that post-Fordist capitalism has basically relied on exploiting the capacities, infrastructures, and institutions of that earlier era, without necessarily replenishing the pond for the next round of innovation. As we pull back the reins on public support for science, education, and infrastructure, who will be the cool kids, and will there be any smart kids in any position to invent those amazing new things? Or are these material circumstances already built into these assumptions?

Economic Complexity Theory

I have been meaning to finish a long post on the implications I see in the economic complexity theories coming out of MIT. The basic theory is that economies are best thought of not in aggregate terms like GDP, or even exports per se, but in terms of the capabilities those exports represent. These capabilities can then be thought of as the raw materials for future exports and economic activities. As a short NYTimes piece last spring described it:

Hidalgo and Hausmann think of economies as collections of “capabilities” that can be combined in different ways like an Erector set to produce different products. An Internet retailer, for instance, cannot function without some kind of electronic-payment network. It also needs a working system of postal addresses — not every country has one — as well as reliable mail. Because these capabilities cannot be easily identified and observed, Hausmann and Hidalgo track the silhouettes that the capabilities cast upon trade statistics. If a product is a significant part of a country’s exports, it offers evidence that the country has certain kinds of related capabilities.

The sticking point is what those capabilities mean. In general they seem to break it down according to a distinction relevant to the statistical physicist (Hidalgo) behind the endeavor: capabilities are related to economic development as potential energy is related to kinetic energy. If a country has a lot of capabilities and a low per capita GDP, then it is presumed to have high potential energy, waiting to explode into GDP growth. On the other hand, if a country has high capabilities and a high per capita GDP, its already kinetic activity has little upward potential (outside of discovering cold fusion).

As with most attempts to equate economics with physics, this one has all sorts of cultural, political, and social blind spots, but it is an interesting set of abstractions that may be fun to play with. The scholars at MIT have written a book – free on their website, of course – outlining this theory. But the real fun is in playing with their data on the MIT Media Lab website.
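
For a feel of the arithmetic their measures start from, here is a minimal sketch: revealed comparative advantage (RCA) computed from a toy country-by-product export matrix, followed by the diversity and ubiquity counts that seed their “method of reflections.” The matrix is invented; the real data cover thousands of products and most of the world’s countries.

```python
import numpy as np

# Toy export matrix: rows = countries, columns = products (dollar values).
countries = ["A", "B", "C"]
products = ["ore", "textiles", "machinery", "software"]
exports = np.array([
    [90.0, 10.0,  0.0,  0.0],   # A: concentrated in ore
    [20.0, 40.0, 30.0, 10.0],   # B: fairly diversified
    [ 0.0, 10.0, 50.0, 60.0],   # C: weighted toward complex products
])

# Revealed comparative advantage: a country "makes" a product if that product's
# share of the country's exports exceeds its share of world exports.
country_share = exports / exports.sum(axis=1, keepdims=True)
world_share = exports.sum(axis=0) / exports.sum()
rca = country_share / world_share
M = (rca >= 1.0).astype(int)   # binary country-product matrix

diversity = M.sum(axis=1)      # products each country exports competitively
ubiquity = M.sum(axis=0)       # countries competitively exporting each product

for c, d in zip(countries, diversity):
    print(f"country {c}: diversity = {d}")
for p, u in zip(products, ubiquity):
    print(f"product {p}: ubiquity = {u}")

# The "reflections" then bounce these counts back and forth: a first correction
# scores a country by the average ubiquity of its export basket, so exporting
# many non-ubiquitous products is read as evidence of many capabilities.
kc1 = (M * ubiquity).sum(axis=1) / np.maximum(diversity, 1)
print("average ubiquity of each country's exports:", kc1.round(2))
```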

Experiments like this seem to illustrate just how difficult it is to predict future growth.  But they might also offer some insight into how it might be generated.  It would seem the best idea would be to help people within your country develop tacit knowledge and deep competencies and capabilities.  If only we had some sort of institutions where these kinds of things could be developed – some sort of publicly supported infrastructure of higher education. Hmmmm.

Facebook: dot.com bubble, 2.0

As Facebook prepares for its IPO, Businessweek points out that it has a very meagre profit profile.

There’s no question Facebook is huge—possibly the largest digital-only social enterprise that has ever existed—and it’s still growing at a fairly rapid rate. Just a few months ago it crossed the 800-million-user mark, and it has now passed 900 million, which suggests it will probably rack up a billion active users sometime later this year. And more than half of that user base visits the site at least once a day, a level of engagement other Web services would likely kill for. [. . . .] Facebook’s problem is somewhat different: It has nearly a billion active users, but it makes a remarkably tiny amount from each one—about $5 per year. That’s not a lot, considering over half of those users visit every day. And while the amount Facebook makes from the average user rose in the most recent quarter, it grew just 6 percent. Some of the marketing costs the company is racking up are no doubt increasing that number. But how much more can Facebook squeeze out of its existing user base?

Much of the focus on the company is on its incredible number of dedicated users – which should be Web 2.0 manna. But unless the goal is to have all 6-7 billion people on the planet on Facebook, there is only so much further it can grow in that direction. The question is how much of that dedicated user base is due to the relatively uncommercial skin of the platform. As Facebook tries to monetize it with more ads or deeper data collection and sharing, will people run away?
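
To make that ceiling concrete, a quick back-of-envelope sketch: the per-user revenue figure comes from the passage quoted above, while the user-count scenarios are hypothetical.

```python
# Rough arithmetic on the Businessweek figures: ~900 million users generating
# roughly $5 of revenue per user per year.
arpu = 5.0  # dollars per user per year, from the quote above

for users in (900e6, 1.5e9, 2e9):  # today, plus two hypothetical ceilings
    revenue = users * arpu
    print(f"{users / 1e9:.1f}B users x ${arpu:.0f}/yr = ${revenue / 1e9:.1f}B per year")

# Even saturating a quarter of the planet only roughly doubles revenue, so
# meaningful growth has to come from extracting more from each user instead.
```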

This is a very specific question about Facebook, but it speaks to the general anxiety of our age: as culture can be more freely distributed and shared – arts, music, movies, education, and news – how do we develop the structures to support those activities in a sustainable way?  In this sense, there is a very fine line between micro-level hypercapitalism and no capitalism at all, at least in terms of supporting cultural production through its commodification.  We may soon have to answer some difficult questions about what kind of work we would like to support and foster as a society – and how.

Meanwhile, please share this post on Facebook.  We don’t get any money from the site, but it is always gratifying to see those pageview stats.

Futurists vs. Spies

Sigh.

I like the idea of Wikileaks – and I believe Manning is being held illegally – and am usually at least amused, if not excited, by the antics of Anonymous. But it really sounds like their most recent cache of hacked emails is a desperate ploy to keep Assange and his associates on the media radar. Ironically, they seem to have done so by taking seriously a firm that had been trying for some time to get on that very same radar. As Max Fisher, writing in The Atlantic, put it earlier today:

Stratfor is not the shadow-CIA that Wikileaks seems to believe it is, but much of the blame for this mistake actually lies with Stratfor itself.  The group has spent over a decade trying to convince the world that it is a for-hire, cutting-edge intel firm with tentacles everywhere. Before their marketing campaign fooled Anonymous, it fooled wealthy clients; before it fooled clients, it hooked a couple of reporters.

I think The Atlantic, The Christian Science Monitor, and others who’ve criticized the revelations of the latest leak may be overstating the case (for instance, according to The Statesman, emails included in the leak reveal that Pakistani ISI leaders were in contact with Osama Bin Laden); but for the most part, this seems like more of a takedown of a low-level player in the intelligence community.

On the other hand, Stratfor, which operates here in Austin, is criticized by several of the naysayers as being a farce because it relies on open-source data (a critique people have made of the firm for more than a decade). But as several of Bryan’s posts point out, there is nothing necessarily obvious about the patterns, or even the information, that emerge from the vast data available openly on the web. The Atlantic itself reported on an AP story in November that described the CIA’s social media monitoring center, where analysts try to read 5 million tweets a day. And other private firms, like Recorded Future, are trying to create big-data solutions for predicting events around the world by using openly available media (Open Source Intelligence, or OSINT) and sifting it through various semantic filters to find patterns. I can’t reveal my source, but I also know for a fact that the Swedish Defense Research Agency has partnered with firms like Recorded Future to enlist their help in developing software and frameworks that are almost identical to what Stratfor claims it uses or has used in developing its intelligence. I suppose the proof is in the pudding, but there’s nothing inherently faulty in the model they claim to use.

And in any case, the more troubling thing the episode reveals about the Wikileaks philosophy is the rather thin film sitting between the supposedly unique and isolated realms of journalism, intelligence, spying, academic futurism, and even corporate espionage – where merely sitting in one of these ill-defined categories apparently makes you fair game for Anonymous and Wikileaks, simply because your mission statement and political allegiances differ from theirs. If Coca-Cola can’t be bothered to find out the number of PETA supporters in Canada, and Stratfor has some low-level interns it can put on the job instead, it is hardly a case of international significance. I don’t approve of the actions of Coca-Cola or Dow, but I don’t think contracting with them to do what amounts to basic research qualifies as a punishable offense. If doing this justifies breaching Stratfor’s security and revealing internal e-mails to the world, where would they draw the line? It makes Wikileaks less a venue for revealing state secrets to the world and more a website run by well-equipped bullies with more fashionable ways of gathering secret information than even Stratfor (though one imagines Stratfor or Recorded Future could probably make more sense of the 5,000,000 emails in aggregate).

Still, it does seem that the firm’s secondary specialty is self-aggrandizement.  Stratfor founder George Friedman wrote a surprise bestseller in 2009 in which he took his futurist schtick to the next level: attempting to predict The Next 100 Years (excerpted here).  The book sounds impressive (or at least entertaining), and he followed its success with a more manageable time horizon: The Next Decade.  Though Friedman is obviously well read, according to Daniel Drezner’s review of the latest book, his predictions are basically meditations on two controlling factors:

In both books his prognoses are based on two factors that persist over time: geography and demographics. Geography is what you think it is–a country’s physical attributes and resources. For example, Japan’s island status and dearth of natural resources mean that it relies heavily on sea-lanes. Demographics change more rapidly than geography, but those changes take decades. So when Friedman observes that southern Europe is depopulating, he’s going to be right for quite some time.

Leaning on these two factors, The Next Decade arrives at a few conclusions that might seem counterintuitive: The United States has devoted disproportionate resources to counterterrorism. China’s ascent has been exaggerated because rising inequality and slower economic growth will lead to domestic instability. Russia and Germany will become closer allies.

The belief that geography equals destiny, though, is complicated by the fact that geography is a constant–and constants are lousy predictors of change. Sure, Friedman sounds sensible when he forecasts that Japan, bristling at its reliance on the U.S. Navy to keep its sea-lanes open, will take a more aggressive approach to the Pacific Rim. This prediction also sounded reasonable two decades ago, when Friedman co-authored a book called The Coming War With Japan. But there are many aspects of world affairs other than geography–historical ties, technological innovation, and religious beliefs, for instance–that can influence governments.

These predictions are more along the lines of academic analysis of world history – an admirable enterprise, to be sure, but one which should teach its purveyor the dangers of making definitive statements.  His understanding of geography as a key to military and economic development figures into many historians’ frameworks – off the top of my head, it figures centrally into the late Giovanni Arrighi’s World Systems analysis in The Long Twentieth Century.  In that pathbreaking work (which inspired a spirited dialogue with David Harvey in the last years of Arrighi’s life), the relative geographical isolation of first England and then the US allows each to devote its economic resources to a less scattered array of military hardware.  But even this is only one factor among many – and nowhere near the only one Arrighi uses to explain the rise and fall of economic and political hegemony over the course of the last 500 years.  In other words, as one pithy review of The Coming War with Japan (1991) put it:

This one-sided, sensational book contends that a military confrontation between the United States and Japan is likely within the next 20 years. According to the authors, the issues are the same as they were in 1941: Japan needs to control access to its mineral supplies in Southeast Asia and to have an export market it can dominate. In order to do this, Japan must force the United States out of the western Pacific. There is little effort to explore the substantial differences between the 1940s and the 1990s.

In their defense, this seems to have been a popular theme in the culture more generally.  Arrighi’s book (also published in the early 1990s) makes dramatic predictions about the rise and supremacy of Japan – predictions he had to revise in his follow-up, which explained the unpredicted (by him, anyway) rise of China.

So whether Stratfor is a good intelligence firm, a secret spy organization, or just another font of futurist analysis, we can probably assume that a good number of the 5,000,000 e-mails Anonymous hacked and Wikileaks has made available will prove even less significant to the course of human events than Friedman’s published predictions.  As with all such predictions, time will tell.

a/Udacity of/and the future

My colleagues at NITLE and I are very excited about this new development in the Stanford AI MOOC.  The professor who taught it is leaving his tenured(?) post at Stanford to found an independent for-profit education venture.  In some ways this portends a future scenario for higher education in which students and faculty are displaced from the old model and institutions of higher education, roaming free and meeting like the masterless samurai – the ronin – of feudal Japan. Though in this case, it doesn’t seem like the institution was keeping him from doing this (in fact, it has expanded its offerings based on his success). He just likely thinks he can make more this way – and so, evidently, does Charles River Ventures, which appears to be funding it.

According to this piece, they are part-time instructors, not tenured profs (though this may be in reference to two other profs at Stanford who are starting a similar project but aren’t sure if they will try to spin it off into a for-profit entity like Udacity). Great ronin-esque quote here:

the students each got a letter with their grade and class rank, signed by the professors. No Stanford seal, just the professor’s name and signature. “It raises the question: Whose certification matters, for what purposes?” Michael Feldstein, a widely read educational technology blogger, told Inside Higher Ed at the time. “If individual professors can begin to certify student competence, [then] that begins to unravel the entire fabric of the institution itself.”

More posts covering this announcement:

  • this post above notes that the first class [how to build a search engine] is taught by someone from Google, and a Google exec provided a plug for the course.
  • more here from Reuters – which I think was one of the first places to carry the announcement. On the question of pedagogy, Thrun defers to the darling of Bill Gates et al. It’s worth noting that his style must have been mostly lecture-based to begin with, since he was teaching a class of 200.

Thrun was eloquent on the subject of how he realized that he had been running “weeder” classes, designed to be tough and make students fail and make himself, the professor, look good. Going forwards, he said, he wanted to learn from Khan Academy and build courses designed to make as many students as possible succeed — by revisiting classes and tests as many times as necessary until they really master the material.

But note here: Thrun doesn’t help them master the material. He gives them materials to help them master the ideas on their own. One-on-one in this context means one person watching another on a screen. It’s still largely autodidactic, and the subject matter (programming) sits much further along the continuum of unambiguous topics – there may be debates, but questions about a programming language usually have clear answers. It would be far more difficult to scale humanities classes, writing-intensive classes, or even social science classes to this level. Not that it can’t be done, but as my NITLE colleague Rebecca Davis pointed out, there is little innovation in pedagogy here.
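That relative unambiguity is also why the model scales: a programming exercise can be graded by a machine, with no instructor in the loop. Here is a minimal sketch of that idea – an illustration of machine-checkable answers, not Udacity’s actual grading system; the exercise and function names are hypothetical.

```python
# Why programming scales as MOOC subject matter: answers can be checked
# automatically. Illustration only, not Udacity's grader; the exercise
# and names below are hypothetical.

def grade_submission(student_fn):
    """Run a student's function against fixed test cases and return pass/fail."""
    test_cases = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]
    return all(student_fn(*args) == expected for args, expected in test_cases)

# A hypothetical student submission for an "add two numbers" exercise.
def student_add(a, b):
    return a + b

print(grade_submission(student_add))  # True – graded instantly, at any scale
```

A seminar paper or a close reading has no equivalent of `test_cases`, which is much of the reason the humanities version of this is so much harder to imagine.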

On the other hand, it is interesting to imagine what this credential will do in the future. Students get a letter with their class rank – e.g., I was student 314,567 out of 500,000 in the Spring 2014 edition of this class. There is a very specific context in which this credential will have value. The global community of programmers and computer scientists will be able to interpret its significance based on their knowledge of this highly visible, nay celebrity, class. But as usual the focus (especially in the Reuters piece) is on the cream of the crop (“248 students out of 160K got perfect scores”). It is laudable that these students – all of whom were far away from Stanford’s campus – were so well equipped (though it’s worth noting many of them could have been advanced computer scientists simply interested in the course as a kind of game). But what does it mean to be in the bottom fifth of a class of 500,000? Just that you’re a loser? Is there any possibility that you simply have a different learning style that doesn’t translate to this model? Does it matter if your future career will depend on it?  And does it mean anything to be in the top fifth of a MOOC? How many of your fellow students might be third-world “credential farmers” (a la gold farmers) or even AI computers being trained to learn how to do this farming automatically?  It wouldn’t take long for these subsidiary industries to catch on once such a credential is seen as a ticket to a good job.

My next thought was that it is hard to imagine what this credential would mean 5, 10, or 15 years after you take the class. I suppose that doesn’t matter, since by that time you’ll (hopefully) have had several jobs built on top of it. And, I suppose, if the model catches on, Udacity will get a roster of alumni to build the brand.

The other trend this could point to, especially if Google is somehow involved, is the direct training of high-tech labor by the companies that want to hire it. Gates is always complaining about the lack of qualified students here. This would be a workaround. Instead of giving students apprenticeships once they finish college to complete their training, simply establish an online college that trains students in the most streamlined way possible, then hire the top 0.1%. It’s unlikely to create the kind of independent, critical thinking these companies say they want, but it makes it easier to skim people off the top and drop them directly into the work you want them doing. Of course, it also means that, for prospective workers, the training they receive, free as it might be, will be valuable mostly to the company that provides it. If you don’t make the cut in that online class, you’ll need to go take one with Apple or Adobe, until you figure out what your strengths are, whether you belong in one of the few lucrative professions left in the US, or whether you would have been better off (or happier) going into social work. From a public policy standpoint, in other words, this would largely be another example of what Siva Vaidhyanathan might call a “public failure” – a concept he develops in relation to the Google Books project.

Public failure [in contrast to market failure, its mirror image] occurs when the instruments of the state cannot satisfy public needs and deliver services effectively. This failure occurs not because the state is the inappropriate agent to solve the particular problem (although there are plenty of areas in which state service is inefficient and counterproductive); it may occur when the public sector has been intentionally dismantled, degraded, or underfunded, while expectations for its performance remain high.  Examples of public failures in the United States include military operations, prisons, health care coverage, and schooling.  The public institutions that were supposed to provide these services were prevented from doing so.  Private actors filled the vacuum, often failing spectacularly as well and costing the public more than the institutions they displaced.  In such circumstances, the failure of public institutions gives rise to the circular logic that dominates political debate.  Public institutions can fail; public institutions need tax revenue; therefore we must reduce support for public institutions.  The resulting failures then supply more anecdotes supporting the view that public institutions fail by design rather than by political choice.

It’s quite possible that Udacity will be a tremendous success at delivering the kind of content it is designed to deliver.  And insofar as it is, we should do all that we can to learn from it and genuinely integrate those lessons into the dominant model of post-secondary education.  On the other hand, I think this kind of course has a very specific purpose, one which will be better complemented by a broader education that allows for more structured forms of play, interaction, and discussion built into its pedagogy.  I’m very curious, for instance, to hear what Cathy Davidson might say about this model.  I suspect her first observation would be about the way it individualizes the students and prevents them from learning from one another.  I wonder what it would look like – and what effect it would have on the class – if students built some sort of social platform alongside it to do collaborative learning exercises.

In either case, it would seem that most people would need more than a single class to build up a fungible credential portfolio.  Whether they would need the equivalent of 40 classes across a range of disciplines is up for discussion.  And much of this may simply depend on what it seems people need in order to get jobs.  I suppose the question then becomes how much we want to base the future education of our country solely on the narrow vocational concerns of the few remaining “job creators.”  It’s a hard question to ask right now.  They say there are no atheists in foxholes; humanists in recessions fare only slightly better.

Ladies Home Journal FTW (on predictions)

In yesterday’s BBC News Magazine, Tom Geoghegan has a rundown of several predictions made by John Elfreth Watkins in the early 20th century.  Watkins wrote an article titled “What May Happen in the Next Hundred Years” for the Ladies Home Journal.  Geoghegan builds from an earlier overview of the article by Jeff Nilsson in The Saturday Evening Post, but also queries Patrick Turner from the World Future Society on how well those predictions have held up.  Overall, Watkins was quite prescient.  Moreover, he was very upbeat.  In Nilsson’s words, “Every one of his predictions involved an improvement in the lives of Americans. He saw only positive change in the new century. Today’s predictors don’t see the future so optimistically, but will they see it as clearly as Watkins?”  Some of those positive changes, predicted correctly:

  • Digital color photography
  • Rising height (correctly predicted at 2″) and increased life expectancy (lowballed: he says 50; it was 77) of Americans
  • Slower population growth
  • Mobile phones, networked computing, television, or something like them
  • Pre-prepared meals (some Hungry Man eaters may dispute this as necessarily “positive;” also he was wrong in that they unfortunately are not delivered to every household by pneumatic tube.)
  • Central heating and A/C
  • Hothouse vegetables, bigger fruit, household refrigerators, and refrigerated transport of produce across the country and the hemisphere
  • Cheaper cars
  • Airplanes, armored tanks, and high-speed trains

More interesting to me, however, are the things he got wrong (and, perhaps, why).  In some cases, he might just be ahead of the curve – such as his prediction that all wild animals will be extinct (no mention of electric sheep).  In others, it is clear that political pressure kept what was a reasonable expectation from taking hold.  Two stand out especially clearly.

The first is his prediction that, in Geoghegan’s paraphrase, there will be no cars in large cities. In Watkins’ words, “All hurry traffic will be below or above ground when brought within city limits.”  What this meant to Watkins was that people would walk, ride moving walkways, or take trains around the city rather than drive.  This was a reasonable prediction – and would have been a reasonable way to build sustainable cities, were we at all interested in that.  At the end of the 19th and for much of the early 20th century, this was largely the trajectory, with cars being only a small component in the transportation ecosystem of major cities.  Electric streetcars, for instance, were a common utility run by municipalities across the country.  Among the changes necessary for the transformation to take hold: car companies like General Motors bought up streetcar systems (bought them up, that is, and eliminated them), and they were aided in their efforts by various laws, subsidies, and eventually nationwide projects of investment.  As Guy Span summarizes this 20th century transformation:

While GM was engaged in what can only be described as an all out attack on transit, our government made no effort to assist traction whatsoever and streetcars began to fade in earnest after the Second World War. In 1946, the government began its Interstate Highway program, with lots of lobbying from GM, arguably the largest public works project in recorded history. In 1956, this was expanded with the National Interstate Highway and Defense Act. Gas tax funds could only be spent on more roads. More cars in service meant more gas taxes to fund more roads. And we got lots of roads.

More and better roads doomed the interurban electric railways and they fell like flies. Outstanding systems like the Chicago North Shore Line (which operated from the northern suburbs into Chicago on the elevated loop until 1962) were allowed to go bankrupt and be scrapped. The Bamburger between Salt Lake City and Ogden failed with its high-speed Brill Cars in 1952. Today, arguably only two of the vast empire of interurban systems survived: The Philadelphia and Western Suburban Railway – aka the Red Arrow Lines (now a part of SEPTA) and the Chicago South Shore and South Bend Railway (now state owned). And highways had everything to do with this extinction.

The United States government, state agencies, and local communities allowed these systems to fail. In the District of Columbia, Congress ordered the elimination of streetcars over the strong objections of the local owners and managers. The government was doing its part.

So let’s not forget the words of Charlie Wilson when asked if there were a conflict with his former employer (GM) on his possible appointment to Secretary of Defense in 1953. He replied, “I cannot conceive of one because for years, I thought what was good for our country was good for General Motors, and vice versa.”

In other words, whatever the benefits of cars over trains or streetcars, we’ll never know which could have won that fight because, in all cases, the only way one or the other of these highly capital-intensive systems would become dominant was through public policy and state investment – and the palm-greasing and arm-twisting, known as lobbying, that makes those things happen.  Watkins, in other words, wasn’t wrong in terms of technology: he just incorrectly predicted who would be able to graft the system better.  Streetcars, subways, and intercity trains held the lead for the last quarter of the 19th century and the first quarter of the 20th: then the power shifted to a new set of titans.  Go figure.  This should be a lesson to all the people who look today at Apple or Google for predictions of what the next 100 years will look like.  We should probably look downmarket for the people who will be willing to get their hands dirty with all the sick lobbying they’ll need to do to win the future.

Second, Watkins predicts that college education would be “free to every man and woman.”  Here he was an excellent predictor of the post-industrial needs (and capabilities) of US society.  In fact, every man and woman probably should have higher education – though what that means has certainly shifted over the past century.  We’ve come a long way from the levels of education at that point, and until recently, many European countries were close to providing this.  As Doug Henwood points out, it wouldn’t be much of a stretch to do this:

It would not be hard at all to make higher education completely free in the USA. It accounts for not quite 2% of GDP. The personal share, about 1% of GDP, is a third of the income of the richest 10,000 households in the U.S., or three months of Pentagon spending. It’s less than four months of what we waste on administrative costs by not having a single-payer health care finance system. But introduce such a proposal into an election campaign and you would be regarded as suicidally insane.
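As a rough sanity check on those proportions, here is a back-of-the-envelope sketch using my own approximations for circa-2011 figures (roughly $15 trillion in US GDP and roughly $700 billion in annual Pentagon spending); the inputs are mine, not Henwood’s.

```python
# Back-of-the-envelope check of the proportions Henwood cites.
# The GDP and Pentagon figures below are my rough approximations for ~2011.
us_gdp = 15e12              # US GDP, roughly $15 trillion
pentagon_budget = 700e9     # annual defense spending, roughly $700 billion

personal_share = 0.01 * us_gdp               # households' higher-ed spending (~1% of GDP)
three_months_pentagon = pentagon_budget / 4  # three months of Pentagon spending
top_10k_income = personal_share * 3          # if that spending is a third of their income

print(f"Personal spending on higher ed: ${personal_share / 1e9:.0f}B")
print(f"Three months of Pentagon spending: ${three_months_pentagon / 1e9:.0f}B")
print(f"Implied income of the richest 10,000 households: ${top_10k_income / 1e9:.0f}B")
print(f"  ...or about ${top_10k_income / 10_000 / 1e6:.0f}M per household per year")
```

On those rough numbers the orders of magnitude line up: making the personal share of higher education free would be an expense on the scale of a quarter of the Pentagon’s year.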

This lack of political will has not only kept higher education from being free, but has made it more difficult even to pay off the loans necessary to get through college at current rates (as Henwood also points out, the increase in sticker-price tuition is almost directly proportional to the decrease in state funding of higher ed).  The result is that the US population is losing its lead among OECD countries (even in high school diploma attainment), and the current generation of US citizens will be the first to have less (formal) education than their parents.

Like the prediction about cars (and, likely, most of Watkins’ predictions), his prediction about higher education is not hobbled by technology (at least not only) or demand (at least not directly).  Much of this is, instead, the result of particular policies and funding priorities.  Still, it is impressive that, at the height of Robber Baron supremacy, Watkins could imagine a society so publicly minded.  Perhaps we should take a cue from his positive outlook – and try to make it more likely (politically and socially) in the next century.