The future of intellectuals

Cracked is the quintessential internet trivia time suck. They produce witty, often intelligent annotated lists of historical and pop-culture references, spread out over several pages to increase click-throughs and ad revenue. It’s fun, but you know whose cud you’re chewing.

But they usually curate some interesting digital objects. Back in June they released the list “The 5 Most Ridiculous Pop Culture Predictions That Came True.” Social media being what it is, I just saw it today in one or another of the feeds I was monitoring. Two of the choices are incidental (a 1987 sitcom predicted the death of Gaddafi almost to the day, and the chilling cover and title of O.J.’s 2007 book, If I Did It, were almost exactly reproduced in a 1999 segment of Chris Rock’s comedy show). And back in January of 2012 we talked about their #1 choice (the predictions made by Ladies’ Home Journal in the early 1900s of the century hence).

The other two are rather striking. #2 is by former NITLE fellow Alan Kay and involves his accurate 1982 predictions (and drawings) of the world of the future – which include iPads, WiFi, computer terminals on the backs of airplane seats, digital museum guides for children, and the ability to take a course online while you sit in a bar. The latter two are still in development, but most of the others have pretty much come to pass. Kay is a smart guy, and he would say that the fact that his predictions came true has less to do with his position as a sage than with the fact that he actively helped to make that future. What they really show is our own lack of truly imaginative visions of the future, most of which involve social rather than technological innovations.

This is nowhere more evident than in the #5 choice Cracked provides. It centers on a prognostic piece of promotional video created by Apple in 1987. The video imagines something called the “Knowledge Navigator,” which isn’t all that different from Vannevar Bush’s Memex or the librarian in Stephenson’s Snow Crash. Cracked sees the video as accurately predicting the 2011 release of the Siri-enhanced iPhone 4S, which “was released about two-and-a-half weeks after this video was to take place.” It is an impressive video precisely because it is so different from the hip, quick-cut, music-filled advertisements for Apple products. Instead it shows a university professor – a scientist who studies deforestation – engaged in an extensive conversation with a personal digital assistant embedded in a tablet computer on his desk.

The assistant keeps his schedule, reminds him of things he needs to pick up, receives his calls and then helps him create, over his lunch break, an advanced research project and presentation with a colleague.  The assistant helps him (and eventually his colleague, who joins them via video chat) look up scholarly journal articles, summarize their content, and even clip and manipulate data for a presentation: all using conversational verbal requests. The colleague eventually agrees to virtually stop by to supplement a lecture he was scheduled to give but has failed to prepare. It is pretty slick and in some ways exactly replicates the environment in which I do a lot of my work – though in my case Google plays a bigger role than Apple. Hangouts, search, and sharing documents are all a bigger deal than the fact that I can do it on my tablet instead of my laptop. 

As in the Kay example, Apple hardly took a back seat while this future unfolded (though Siri was actually developed at the Stanford Research Institute and then acquired by Apple in 2010). Still, this is clearly a moment where, as Raymond Williams might have put it, we can see the intention behind technological development, the social function it was meant to serve. While there are parallels between the imagined product and the real one, what is probably most interesting is the aspect that remains impossible: the fantasy that computers will soon be capable of scouring vast reserves of information and producing knowledge from that information.

Siri is pretty good at making appointments and sending emails (some of the functions of the Knowledge Navigator). Given this, Cracked is somewhat justified in finding the prediction true. However, when it comes to the kind of advanced searching, summarizing, and synthesis demonstrated in the Apple video – the stuff that is actually interesting and visionary – we are still a long way off.

As David Weinberger outlines in his 2012 book Too Big to Know (and as many others have described before him), the Data-Information-Knowledge-Wisdom hierarchy describes an increasingly refined process whereby messy pools of data are transformed into knowledge and then wisdom (presumably through better and better computer programs). While the video doesn’t give us any insight into the back-end workings of the Knowledge Navigator, it appears to work like every other utopian knowledge assistant – namely, as a seamless and comprehensively informed artificial intelligence able to scour journal articles and immediately provide not only a summary of the knowledge they contain but links to other works. As Weinberger complained in his book (and this excerpt):

But knowledge is not a result merely of filtering or algorithms. It results from a far more complex process that is social, goal-driven, contextual, and culturally-bound. We get to knowledge — especially “actionable” knowledge — by having desires and curiosity, through plotting and play, by being wrong more often than right, by talking with others and forming social bonds, by applying methods and then backing away from them, by calculation and serendipity, by rationality and intuition, by institutional processes and social roles. Most important in this regard, where the decisions are tough and knowledge is hard to come by, knowledge is not determined by information, for it is the knowing process that first decides which information is relevant, and how it is to be used.


The real problem with the DIKW pyramid is that it’s a pyramid. The image that knowledge (much less wisdom) results from applying finer-grained filters at each level, paints the wrong picture. That view is natural to the Information Age which has been all about filtering noise, reducing the flow to what is clean, clear and manageable. Knowledge is more creative, messier, harder won, and far more discontinuous.

In this regard, what is perhaps most significant about the Knowledge Navigator and the tech-fueled dreams it represents is the role of the intellectual in this environment. We don’t know what the good professor might have been doing before this moment, but it is rather telling that he has evidently only passingly kept up with his field of expertise. He’s due to give a lecture this afternoon, but only dimly recalls an article that came out months ago, which he meant to read but hasn’t yet. He is the pinnacle of the pyramid, waiting till the last minute to sort the mechanized knowledge produced by his machine (and female colleagues) and to instantly generate wisdom so compelling and obvious it can easily be demonstrated with visual animations.

One can imagine a framework in which the Knowledge Navigator would work as this vision hopes, but it would rely on a vast network of other researchers who make their studies available in vast, openly searchable repositories – repositories that also help curate the most important research of the past few months. Summaries might be written into the abstracts (a process that is highly uneven among humans, though perhaps they would get better at it if they thought they were writing for a machine), and it might be easy to quickly surf toward the clearly truthful statements, as opposed to the usually messy and partial conclusions of the average scientific paper (much less a paper in the social sciences, arts, or humanities). Bracketing the fact that knowledge changes, that most questions are not so easily answered at this level of inquiry, and that most conclusions are not so easily communicated, it seems a worthwhile machine to have. Keep in mind, this is beyond the kind of Watson machine that can answer trivia questions on TV. This is a machine that can instantly read and summarize entire academic articles. Or so it appears.

But the most likely way for something like this to work would be through harnessing the collective wisdom of the intellectual crowd rather than expecting that you can program a computer to think at the level of an advanced professor in the sciences. This, however, would presume there is still an intellectual crowd to help produce this knowledge cum wisdom. There is no hint of this being one of the mechanisms that would make the Navigator function, but as scholars are increasingly realizing, open platforms make for faster feedback loops and a social (intellectual) filtering that supersedes anything an individual AI can produce, at least at the current moment. Go to Google Scholar and search for “deforestation.” Even now, in the fifth month of 2013, there are already over 5,000 citations for the term, and the search engine has no way of filtering them by discipline, other than equally baggy and imprecise methods like combining deforestation with other keywords. On the other hand, one can easily see how an academic social media network, if it really caught on and academics used it, could become a pretty decent sorting mechanism. And, of course, there is the time-tested insularity of knowledge production, fully evident in the video: keeping up with your close colleagues’ work and becoming colleagues with the people whose work is respected (otherwise known as good old-fashioned networking). Brewster Kahle at the Internet Archive might have set out to create a massive collection of data for the purposes of educating AI machines at some distant point in the future, but in the meantime it is likely far more useful to have a well-organized archive that can be used by actual humans.
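To see just how baggy that keyword approach is, here is a toy sketch. Everything in it – the article records, the per-discipline keyword lists – is invented for illustration; Google Scholar offers no programmatic interface like this.

```python
# Hypothetical sketch of keyword-based disciplinary filtering.
# All records and keyword sets below are invented for illustration.

articles = [
    {"title": "Deforestation and carbon flux in the Amazon basin",
     "abstract": "We model carbon release following tropical deforestation."},
    {"title": "Deforestation discourse in Brazilian policy debates",
     "abstract": "A rhetorical analysis of deforestation in political speech."},
    {"title": "Remote sensing of deforestation fronts",
     "abstract": "Satellite imagery reveals new deforestation hotspots."},
]

# Crude disciplinary proxies: co-occurring keywords per field.
discipline_keywords = {
    "ecology": {"carbon", "flux", "species"},
    "policy": {"policy", "discourse", "rhetorical"},
}

def filter_by_discipline(records, discipline):
    """Keep records whose title or abstract contains any keyword for the discipline."""
    wanted = discipline_keywords[discipline]
    hits = []
    for rec in records:
        words = set((rec["title"] + " " + rec["abstract"]).lower().split())
        if words & wanted:
            hits.append(rec["title"])
    return hits

print(filter_by_discipline(articles, "policy"))
```

The trouble is visible even in the toy: the filter matches surface vocabulary, not disciplinary meaning, so an ecology paper that happens to mention "policy" would slip through, and a policy paper that avoids the magic words would vanish.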

If we think of it in social and cultural terms, then we might actually be very close to having a Knowledge Navigator. Using technology to link large-scale projects, like those being harnessed together under CLIR’s Committee on Coherence at Scale, will amplify this social networking power even further. Legally, though, even this level of technical integration is problematic. The current academic knowledge infrastructure consists of elaborately walled gardens which hamper the searches of many academics (particularly those at smaller institutions). Getting an article – to summarize or otherwise – often requires several steps of annoying human intervention (and usually a fee!). And economically, we are at a moment where MOOCs and virtual charter schools threaten to be the latest nail in the coffin of the US system of education – where education is made so mechanical it can be assessed via standardized tests and the number of professors necessary to deliver this content is significantly reduced.

And here, it seems, the desire for a Knowledge Navigator intersects with the broader crisis we are facing in higher education at large. The question of MOOCs is ultimately about how many intellectuals we need to produce higher education and specialized disciplinary research. If we only need a handful to produce the courses that will teach the masses, perhaps we can finally let all those adjuncts go: they can work full time at the coffee shop where they moonlight, and in their spare time they can contract with the Georgia Institute of Technology to be one of the few humans who grade the work of the paying students in the MOOC; or they can contribute to the knowledge base on which the Navigator will work. A handful of actually-paid professors, sitting in their elite, woodgrain-finished offices, will pause from their daily activities (picking up family from the airport and a cake for the birthday party tonight) to briefly consider the knowledge produced for them by the unpaid underclass and highly developed algorithmic machines, then deliver their wisdom via MOOC before they rush off to fulfill their social commitments.

Who this vision of the future enhances and who it exploits will likely be one of the key struggles of the coming decade. It is hard to parse what people like Bill Gates think will come of the MOOC Moment, but clearly he would prefer we think of education as something that happens on the Surface of one of Microsoft’s tablets rather than in the interactions that take place in the physical classroom. At the very least, he would prefer completely disorganized, entrepreneurial teachers whose existence is, at every moment, contingent on meeting the standards set out by one or another of the for-profit companies administering our testing regime – first at the K-12 level and now, with Pearson administering the multiple-choice tests for the Georgia Institute of Technology MOOCs, in higher ed as well.

That may indeed be the future, but, as Gerry Canavan aptly put it back in February, “MOOCs are how you’d structure higher education if you believed there were no future.” By this he means that laying off the majority of your faculty (or, as in the case of the Georgia Institute of Technology, hiring only 8 instructors to oversee 20,000 students) may be economically sustainable (if the only thing that matters is a commodity degree of questionable quality spit out at the end of the program), but it is likely not institutionally sustainable. Today we have well-trained computer science professors who can develop the program based on today’s knowledge; and certainly there are deep pools of exploitable, underemployed adjuncts you can draw upon – adjuncts who took on debt and improved themselves through higher education because, at that point, it still held out some promise of a career replacing those computer science professors at the top, both as teachers and researchers. But who will go to graduate school in the next decade if their only hope is to become one of the handful of instructors necessary to run a MOOC?

Failing to account for, and pay for, the continuation and reproduction of a necessary system isn’t economic rationality; it isn’t a hard-nosed commitment to making the tough choices; it’s the exact opposite. It’s living as if there is no future, no need to reproduce the systems we have now for the future generations who will eventually need them. The fantasy that we could MOOCify education this year to save money on professor labor next year, and gain a few black lines in the budget, ignores the obvious need for a higher educational system that will be able to update, replenish, and sustain the glorious MOOCiversity when that time inevitably comes. Who is supposed to develop all the new and updated MOOCs we’ll need in two, five, ten, twenty years, in response to events and discoveries and technologies we cannot yet imagine? Who is going to moderate the discussion forums, grade the tests, answer questions from the students? In what capacity and under what contract terms will these MOOC-updaters and MOOC-runners be employed? By whom? Where will they have received their training, and how will that training have been paid for? What is the business model for the MOOC — not this quarter, but this decade, this century?

These are the questions the smooth surface of the Knowledge Navigator cannot answer.  They are questions which we used to have answers to based on processes of reproduction and production that have worked fairly well for hundreds of years. There has undoubtedly been an overproduction of intellectuals on some level – if we look strictly at the employment numbers. But it seems a very cynical view of the future to think we won’t need highly trained people to do work we don’t yet understand. It appears from the BLS that this is what employers are predicting. This seems to point to the need for a more public employment of intellectual labor – akin to the public expansion of higher education in the mid-century. It doesn’t take a very wide environmental scan to realize that kind of project is not in the offing anytime soon. 

Right now it appears we are once again putting our faith in gadgetry – in part because the sale of gadgetry is far more profitable than the minimal margins on service work like actually educating people. There are few Alan Kays in the world who can imagine the beneficial uses for technology when it is embedded in a robust social context that values education and learning. And since all of us must now ensure that our contributions to the social totality are measurable in dollars, it is likely that those who already possess that capital will be ahead of the curve.


Futuring media

2020 Media Futures is a fine example of a collaborative futuring project.  The topic is specific (Canada’s media landscape), but the practices are quite general.

2020MF uses a variety of futures methods:

  • Environmental scanning, or signals from the media future.  These are primarily news stories, arranged under general futures rubrics (Social, Technological, Economic, Ecological, Political, Values, or STEEPV) and media-specific topics (books, TV, etc.).
  • Trend analysis.  2020MF determined a series of likely, powerful forces, again arrayed against STEEPV categories.
  • Driver identification, or the forces underpinning the already-selected trends and signals.  Read the full page for a good description of how they managed the group work.
  • Critical uncertainties, the powerful forces which could still appear in very different forms, unlike trends.  Note the way these are very industry-specific:

Critical uncertainties for Canadian media in 2020.

Read the full post »

On the future, MOOCs, tenure, etc.

Last week I spoke on MOOCs in an online seminar with faculty and staff from about 40 different schools. The consensus among that group seemed to be that developing in-house online programs would be to their benefit as institutions. In other words, many of them are looking to create some form of digital teaching program in order to have a version of that product ready should demand from students (or parents) increase. Many of them are also excited by the pedagogical possibilities of using digital platforms differently.

This Rutgers University statement on audio/visual recording is very interesting. In part, it is admirable in that it finds compelling legal and pedagogical reasons to recommend against allowing widespread recording (and sharing via social media) of course materials or classroom activities. Namely, it focuses on the copyright issues that might arise if, for instance, YouTube becomes a popular place for students (or faculty) to post lecture videos or even student discussions. On the one hand, the classroom is seen as a protected space in the copyright code. As I point out below, the recent GSU lawsuit provides a case in point. On the other hand, taping conversations that go on in a classroom, and making them public, has significant pedagogical consequences. Even in the best of scenarios – where students welcome being taped, raising no privacy concerns – we risk having them act as if every meeting were a small episode of Big Brother, where being watched only makes them perform more fully for the camera, rather than engaging in the risky, personal reflection that leads to real learning. The point made by the Rutgers faculty is crucial – public conversations are not always as productive, especially for students who are just trying to figure out what they believe and what they want to learn. For them, the privacy of the classroom, and instructors’ responsibility in the conversation, are essential elements of successful, critical pedagogy. Rethinking copyright and privacy seems essential for how we move forward in the open, online environment. But so is understanding how we value our own teaching and research – and how we expect others to value it.

In this, it is hard not to read this statement in the context of the broader conversation about “disruptive innovation” and venture capital’s version of rethinking the classroom for a digital age. This discourse is seductive because it is based on a certain promise: MOOCs and even lecture capture could be very useful for generating conversations beyond our individual classrooms, perhaps drawing our students into that broader conversation through the dynamic forums Stanford, Harvard, and MIT have created in their MOOC platforms. So it is possible that educators could reach and interact with far more students than they do now – to the benefit of both themselves and their students. This, in turn, sounds very good to the U.S. Department of Education, underfunded and under fire as it is.

To them, the learning possibilities are less important than the economies of scale. MOOCs seem important to business-minded administrators (and their newspaper editorial gurus, e.g. Thomas Friedman) because they seem to solve the entire cost problem in education in one swoop: education is made technologically glamorous, infinitely productive, highly demanded, and incredibly cheap. It has never been a better time to be an educational entrepreneur, because everything is on the table. Every other industry has had active, intelligent laborers replaced with machines that standardize their human actions, turning jobs over to robots (or at least threatening to): we can mechanize factories to lower the labor costs of cars, so why is it taking so long to do away with these pesky professors? As Andrew Delbanco recently quoted Richard Vedder, “With the possible exception of prostitution . . . teaching is the only profession that has had no productivity advance in the 2,400 years since Socrates.”

For them, the promise of MOOCs upsets much of the present infrastructure of education, especially the tenure system itself, which is the precarious but fundamental subsidy of the entire present system. Lost in the conversations about both academic publishing and the arrival of MOOCs is the way the present infrastructure was made possible by these broader subsidies. It is true that an increasingly small number of faculty are put on the tenure track, but every year, for the better part of the last two decades, we have turned out a fresh crop of people hoping to secure a place in that realm, working relentlessly for free with the brass ring of tenured employment dangling ever more remotely in front of them. Those who don’t succeed at first continue teaching as adjuncts, exploiting themselves for the larger goal of educating the next generation of Americans.

Even leaving aside the idea that shared governance might actually be more efficient (and therefore tenure a better political-economic model on which to base the university), the security and responsibility of tenure is a priceless motivation of the higher education system. Removing it will create untold havoc in the intellectual economy that serves as the foundation of MOOCs. Many people have pointed out that MIT, Harvard, and other elite schools are basically using their own well-established brands and resources, currently with little hope of turning a profit on these activities. Already there are signs of people deciding not to go back to school, with many graduate programs reporting lower enrollments and law schools in unprecedented decline. How will MOOCs (or education in general) function without tenure as a subsidy? How will academic publishers continue to produce knowledge? The boycotts of Elsevier and others are small potatoes compared to the decimation of the free academic labor pool that would be caused by truly unleashing market forces on higher education. Gerry Canavan had a different way of phrasing this, which got some attention back in February:

The whole post is worth reading, but this part stuck out to me especially:

Failing to account for, and pay for, the continuation and reproduction of a necessary system isn’t economic rationality; it isn’t a hard-nosed commitment to making the tough choices; it’s the exact opposite. It’s living as if there is no future, no need to reproduce the systems we have now for the future generations who will eventually need them. The fantasy that we could MOOCify education this year to save money on professor labor next year, and gain a few black lines in the budget, ignores the obvious need for a higher educational system that will be able to update, replenish, and sustain the glorious MOOCiversity when that time inevitably comes. Who is supposed to develop all the new and updated MOOCs we’ll need in two, five, ten, twenty years, in response to events and discoveries and technologies we cannot yet imagine? Who is going to moderate the discussion forums, grade the tests, answer questions from the students? In what capacity and under what contract terms will these MOOC-updaters and MOOC-runners be employed? By whom? Where will they have received their training, and how will that training have been paid for? What is the business model for the MOOC — not this quarter, but this decade, this century?

Related to MOOCs is the struggle over scholarly communication more generally. Academic fair use and the ability of libraries to create digital archives are under direct attack by the academic publishing industry. But it is not that industry alone. Take the recent lawsuit brought by SAGE, Oxford University Press, and other major academic publishers – publishers who make their money off the very labor we all do, virtually for free, but really subsidized by the tenure system. Because a condition of our jobs is that we publish, we do so with little expectation of direct economic gain from these activities. These publishers sued librarians and faculty – as individuals – over the policies they had in place around online course reserves. The library provided some digital course reserves, and faculty also provided digital copies of articles and book chapters through learning management systems (LMS) like the Moodle platform many of us use to distribute course materials. Since Georgia State University, as a public institution, has sovereign immunity in these cases, the publishers instead sued the individual administrators within the system, holding them liable for all of the activities around sharing.

Publishers accused the school of making greater use of their materials than copyright law allowed. But the case is really about the future of academic publishers in a world where we can share digital materials far more seamlessly than ever before. And if we can be sued for sharing academic materials, what is to stop us from getting sued if a student videotapes us showing a movie clip and the recording becomes popular enough to attract attention? The judge in the case found that only 5% of the uses the publishers cited fell outside even a conservative definition of fair use, and thus that faculty and librarians are largely on the right side of the law. Yet the publishers have said they will appeal the case – not to protect their own economic interests, but on the advice of the authors of the texts in question, who would like your students to pay them residuals every time you have them use an article (thank you very much). In short, those of us in academia and academic publishing are living in a den of vipers, in which each side is willing to strike the other down for the few remaining morsels of public higher education funding.

If we can’t put course reserves on library websites, how will students avoid paying for them in MOOCs? And what does that mean in terms of their being “Open”? In some cases, getting students to pay for the text is the primary goal of the MOOC: witness the economics professor at U.C. Irvine who was removed as instructor from his own MOOC, in part because he insisted that, to participate, the 40,000 or so students would need to buy his $90 economics textbook. The flipside is what the AAUP has long feared: once our lectures can be recorded and our MOOCs created, what is to stop universities from repackaging them, deeming them “works for hire,” and leaving faculty no claim to copyright in their teaching materials?

MOOCs might present some interesting possibilities for the students who will soon be graduating into our universities and colleges – and for those who, while qualified, will not be able to afford a formal education because support for the public system is itself under fire. Since so much of their conversation with their peers will be mediated by some form of audio-visual artifact, it is intriguing to think about what the circulation of our classroom discussions might mean. It is difficult to chart a path through this landscape that doesn’t ultimately lead to a cliff on the other side. Luckily my colleagues – like Bryan Alexander – are on the case.

I’m not sure if the Rutgers Senate statement strikes the right balance, but at least it errs on the side of giving students and faculty control over how recordings will be used. Now they just need a statement of imagination outlining all the possible ways it could help enhance learning if recording were allowed.

Text analysis to anticipate genocide

A new database project attempts to identify impending genocide by spotting key textual indicators.  It’s crowdsourced, called Hatebase, and a co-sponsor describes it like so:

Hatebase, an authoritative, multilingual, usage-based repository of structured hate speech which data-driven NGOs can use to better contextualize conversations from known conflict zones.

A fascinating idea, one part digital humanities, one part pre-crime.  It can also be localized, as a

critical concept in Hatebase is regionality: users can associate hate speech with geography, thus building a parallel dataset of “sightings” which can be monitored for frequency, localization, migration, and transformation.

(via Slashdot)
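For what it’s worth, the core “sightings” mechanism described above can be sketched in a few lines. Everything here – the placeholder terms, the regions, the alert threshold – is invented for illustration; Hatebase’s actual schema and API are surely different.

```python
# Minimal sketch of regionalized "sightings": tally (term, region) reports
# and surface the pairs that cross an alert threshold. All data is invented.
from collections import Counter

sightings = [
    ("term_a", "region_1"), ("term_a", "region_1"), ("term_a", "region_1"),
    ("term_b", "region_1"), ("term_a", "region_2"),
]

def frequency_by_region(records):
    """Tally (term, region) pairs so localization and migration show up."""
    return Counter(records)

def flag_spikes(counts, threshold=3):
    """Return the (term, region) pairs whose count meets the alert threshold."""
    return [pair for pair, n in counts.items() if n >= threshold]

counts = frequency_by_region(sightings)
print(flag_spikes(counts))  # → [('term_a', 'region_1')]
```

Monitoring the same tallies over time windows, rather than in aggregate, is what would let frequency turn into the migration and transformation signals the project describes.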

Technology’s future is S-shaped

Maybe the long boom of technological disruption will slow down, ponders sf writer Charlie Stross.

We are undeniably living through the era of the Great Acceleration; but it’s probably[*] a sigmoid curve, and we may already be past the steepest part of it. [link in original]

Ah yes, that famous S-curve.  Slow to start, fast to build, massive in effect, then ultimately tapering off, like so:

Sigmoid curve plot.

The big “S”.

So we can imagine the tide of industrial-technological change rising in the 1700s, roaring into life during the 1800s, turning into a transformational riptide through the twentieth century, and then, in the 21st, gradually… slowing… down.  The rate of innovation drops.  We become accustomed to the new.  Future shock stops shocking.
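The curve in question is just the logistic function, and a quick sketch (with purely illustrative parameters) shows the slow-fast-slow pattern: the year-on-year change peaks at the midpoint and shrinks in both tails.

```python
# Sketch of the sigmoid ("S") curve: slow start, steep middle, tapering end.
import math

def logistic(t, midpoint=0.0, steepness=1.0):
    """Standard logistic function, rising from 0 toward 1."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

# Sample the curve; successive differences stand in for the rate of change.
values = [logistic(t) for t in range(-6, 7)]
rates = [b - a for a, b in zip(values, values[1:])]
peak = rates.index(max(rates))
print(f"steepest step starts at t={peak - 6}")
```

The differences are largest around the midpoint and fall away symmetrically on either side – which is exactly Stross’s point: if we are past the steepest part, every subsequent step feels smaller than the last.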

Read the full post »

Reversing the future

John Crowley, brilliant writer of splendid speculative fiction, meditates on the future in the most recent Lapham’s. I’d like to draw attention to two main points, beyond the brooding lushness of Crowley’s prose.

First, there’s a futuring method on display.  Even if it’s tongue in cheek, the approach is both entertaining and potentially useful for group work.  The gist: reverse our expectations for the future.  You could even test it retrospectively:

if you simply reversed what the past had imagined, you got something close to the real existing present.

I’d like to try this on small groups.

Crowley then offers an example of this method, projecting one future:

Read the full post »

New Shell scenarios

Shell Oil published a set of future energy scenarios.

Two of them are older ones.  Scramble describes a global energy panic.  Blueprints assumes stronger governmental planning and control.

Two Shell scenarios in mid-stream.

Seeing how Scramble and Blueprints play out.

Shell also released two new ones:

Read the full post »

From the big past to the future

Ian Morris turns his huge historical scope around, and aims it at the next century.

By 2100 we will see cities with 140 million people. Robots will wage war. Humans, whose bodies have changed more in the last 100 years than in the previous 100,000, will “transcend biology.”

The futurist Ray Kurzweil calls this merger of human and machine intelligence “the Singularity.” Morris suggests that something like that may create new ways of capturing energy, communicating, thinking, fighting, working, loving, aging, and reproducing.

Unless, he says, we never get there. The paradox of development is that it produces forces that can cause catastrophe, if not managed properly. Climate change, Morris says, may be the “ultimate example.” The very fossil fuels that propelled social development upward after 1800 are now causing global warming.

But like earlier periods of climate change, Morris predicts, “this one will not directly cause collapse.” The truly scary thing is how people might react to the weather. Climate change could unleash famine, enormous migrations, disease, and perhaps even nuclear war.

Google didn’t catch the flu

Google’s flu-predicting service has been impressively accurate in the past, but failed to apprehend the most recent American outbreak.

Google Flu Trends, which estimates prevalence from flu-related Internet searches, had drastically overestimated peak flu levels.


problems may be due to widespread media coverage of this year’s severe US flu season, including the declaration of a public-health emergency by New York state last month. The press reports may have triggered many flu-related searches by people who were not ill.

This may become a useful cautionary tale in our dawning age of big data.
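The failure mode is easy to sketch: calibrate a naive proportional model on past seasons, where search volume really did track illness, then let media coverage inflate searches. All the numbers below are invented, but the overshoot is the point.

```python
# Toy sketch of search-based flu estimation and its media-spike failure mode.
# Past seasons (invented): (search volume index, actual prevalence per 100k).
history = [(10, 50), (20, 100), (30, 150), (40, 200)]

# Fit the simple proportional model: prevalence ≈ k * searches
# (least-squares through the origin).
k = sum(s * p for s, p in history) / sum(s * s for s, _ in history)

def predict(searches):
    return k * searches

# A media-driven panic: searches spike to 80 while true prevalence is 250.
estimate = predict(80)
print(round(estimate), "vs actual 250")  # → 400 vs actual 250
```

The model has no way to distinguish searches by sick people from searches by worried people, so press coverage alone drives the estimate far above reality – the cautionary tale in miniature.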

A map of the next decade

The IFTF released a fun set of scenarios for the next ten years.

Why fun?  Because they proclaim these are all unlikely.  One set is “too fast to be believable,” while “Type 2 scenarios depend on the conjunction of too many improbabilities.”

Decade map.

Read the full post »