The future of intellectuals

Cracked is the quintessential internet trivia time suck. They produce witty, often intelligent annotated lists of historical and pop-culture references, spread out over several pages to increase the click-throughs and ad revenue. It’s fun, but you know whose cud you’re chewing.

But they usually curate some interesting digital objects. Back in June they released the list of “The 5 Most Ridiculous Pop Culture Predictions That Came True.” Social media being what it is, I just saw it today in one or another of the feeds I was monitoring. Two of them are incidental (a 1987 sitcom predicted the death of Gaddafi almost to the day, and the chilling cover and title of OJ’s 2007 book, If I Did It, was almost exactly reproduced in a 1999 segment of Chris Rock’s comedy show). And back in January of 2012 we talked about their #1 choice (the predictions made by Ladies’ Home Journal in the early 1900s of the century hence).

The other two are rather striking. #2 is by former NITLE fellow Alan Kay and involves his accurate 1982 predictions (and drawings) of the world of the future, which include iPads, WiFi, computer terminals on the backs of airplane seats, digital museum guides for children, and the ability to take a course online while you sit in a bar. The latter two are still in development, but most of the others have pretty much come to pass. Kay is a smart guy and would say that the fact his predictions came true has less to do with his position as a sage than with the fact that he actively helped to make that future. What they really show is our own lack of truly imaginative visions of the future, especially visions of social rather than technological innovation.

This is nowhere more evident than in the #5 choice Cracked provides. It centers on a prognostic piece of promotional video created by Apple in 1987. The video imagines something called the “Knowledge Navigator,” which isn’t all that different from Vannevar Bush’s Memex or the librarian in Stephenson’s Snow Crash. Cracked sees the video as accurately predicting the 2011 release of the Siri-enhanced iPhone 4S, which “was released about two-and-a-half weeks after this video was to take place.” It is an impressive video precisely because it is so different from the hip, quick-cut, music-filled advertisements for Apple products. Instead it shows a university professor – a scientist who studies deforestation – engaged in an extensive conversation with a personal digital assistant embedded in a tablet computer on his desk.

The assistant keeps his schedule, reminds him of things he needs to pick up, receives his calls and then helps him create, over his lunch break, an advanced research project and presentation with a colleague.  The assistant helps him (and eventually his colleague, who joins them via video chat) look up scholarly journal articles, summarize their content, and even clip and manipulate data for a presentation: all using conversational verbal requests. The colleague eventually agrees to virtually stop by to supplement a lecture he was scheduled to give but has failed to prepare. It is pretty slick and in some ways exactly replicates the environment in which I do a lot of my work – though in my case Google plays a bigger role than Apple. Hangouts, search, and sharing documents are all a bigger deal than the fact that I can do it on my tablet instead of my laptop. 

As in the Kay example, Apple hardly took a back seat while this future unfolded (though Siri was actually developed by the Stanford Research Institute and then acquired by Apple in 2010). Still, this is clearly a moment where, as Raymond Williams might have put it, we can see the intention behind technological development, the social function it was meant to serve. While there are parallels between the imagined product and the real one, what is probably most interesting is the aspect that remains impossible: the fantasy that computers will soon be capable of scouring vast reserves of information and producing knowledge based on that information.

Siri is pretty good at making appointments and sending emails (some of the functions of the Knowledge Navigator). Given this, Cracked is somewhat justified in finding this prediction true. However, when it comes to the kind of advanced searching, summarizing, and synthesis demonstrated in the Apple video – the stuff that is actually interesting and visionary – we are still a long way off.

As David Weinberger outlines in his 2012 book Too Big to Know (and many others have described before), the Data-Information-Knowledge-Wisdom hierarchy describes an increasingly refined process whereby messy pools of data are transformed into knowledge and then wisdom (presumably through a better and better computer program). While the video doesn’t give us any insight into the back-end function of the Knowledge Navigator, it appears to work like every other utopian knowledge assistant – namely as a seamless and comprehensively informed artificial intelligence able to scour journal articles and immediately provide not only a summary of the knowledge they contain but also links to other works. As Weinberger complained in his book (and this excerpt):

But knowledge is not a result merely of filtering or algorithms. It results from a far more complex process that is social, goal-driven, contextual, and culturally-bound. We get to knowledge — especially “actionable” knowledge — by having desires and curiosity, through plotting and play, by being wrong more often than right, by talking with others and forming social bonds, by applying methods and then backing away from them, by calculation and serendipity, by rationality and intuition, by institutional processes and social roles. Most important in this regard, where the decisions are tough and knowledge is hard to come by, knowledge is not determined by information, for it is the knowing process that first decides which information is relevant, and how it is to be used.


The real problem with the DIKW pyramid is that it’s a pyramid. The image that knowledge (much less wisdom) results from applying finer-grained filters at each level, paints the wrong picture. That view is natural to the Information Age which has been all about filtering noise, reducing the flow to what is clean, clear and manageable. Knowledge is more creative, messier, harder won, and far more discontinuous.

In this regard, what is perhaps most significant about the Knowledge Navigator and the tech-fueled dreams it represents is the role of the intellectual in this environment. We don’t know what the good professor might have been doing before this moment, but it is rather telling that he has evidently only passingly kept up with his field of expertise. He is due to give a lecture this afternoon, but only dimly recalls an article that came out months ago that he meant to read but hasn’t yet. He is the pinnacle of the pyramid, waiting till the last minute to sort the mechanized knowledge produced by his machine (and female colleagues) and to instantly generate wisdom so compelling and obvious it can easily be demonstrated with visual animations.

One can imagine a framework in which the Knowledge Navigator would work as this vision hopes, but it would rely on a vast network of other researchers who make their studies available in vast, openly searchable repositories – repositories that also help curate the most important research from the past few months. Summaries might be written into the abstracts (this process is highly uneven among humans, but perhaps they would get better at it if they thought they were writing for a machine), and it might be easy to quickly surf toward the clearly truthful statements, as opposed to the usually messy and partial conclusions of the average scientific paper (much less a paper in the social sciences, arts, or humanities). Bracketing the fact that knowledge changes, that most questions are not so easily answered at this level of inquiry, and that most conclusions are not so easily communicated, it seems a worthwhile machine to have. Keep in mind, this is beyond the kind of Watson machine that is able to answer trivia questions on TV. This is a machine that can instantly read and summarize entire academic articles. Or so it appears.

But the most likely way for something like this to work would be through harnessing the collective wisdom of the intellectual crowd rather than expecting you can program a computer to think at the level of an advanced professor in the sciences. This, however, would presume there is still an intellectual crowd to help produce this knowledge cum wisdom. There is no hint of this being one of the mechanisms that would make the Navigator function, but as scholars are increasingly realizing, open platforms make for faster feedback loops and a social (intellectual) filtering that supersedes anything an individual AI can produce, at least at the current moment. Go to Google Scholar and search for “deforestation.” Even now, in the fifth month of 2013, there are already over 5,000 citations for the term, and the search engine has no way of filtering them by discipline, other than equally baggy and imprecise methods like combining deforestation with other keywords. On the other hand, one can easily see how an academic social media network, if it really caught on and academics used it, could become a pretty decent sorting mechanism. And, of course, there is the time-tested insularity of knowledge production, fully evident in the video: keeping up with your close colleagues’ work and becoming colleagues with the people whose work is respected (otherwise known as good old-fashioned networking). Brewster Kahle at the Internet Archive might have set out to create a massive collection of data for the purposes of educating AI machines at some distant point in the future, but in the meantime it is likely far more useful to have a well-organized archive that can be used by actual humans.

If we think of it in social and cultural terms, then we might actually be very close to having a Knowledge Navigator. Using technology to link large-scale projects, like those being harnessed together under CLIR’s Committee on Coherence at Scale, will amplify this social networking power even further. Legally, though, even this level of technical integration is problematic. The current academic knowledge infrastructure consists of elaborately walled gardens which hamper the searches of many academics (particularly those at smaller institutions). Getting an article – to summarize or otherwise – often requires several steps of annoying human intervention (and usually a fee!). And economically, we are at a moment where MOOCs and virtual charter schools threaten to be the latest nail in the coffin of the US system of education – where education is so mechanical it can be performed via standardized tests and the number of professors necessary to deliver this content is significantly reduced.

And here, it seems, the desire for a Knowledge Navigator intersects with the broader crisis we are facing in higher education at large. The question of MOOCs is ultimately about how many intellectuals we need to produce higher education and specialized disciplinary research. If we only need a handful to produce the courses that will teach the masses, perhaps we can finally let all those adjuncts go: they can finally work full time at the coffee shop where they moonlight, and in their spare time they can contract with The Georgia Institute of Technology to be one of the few humans who grade the work of the paying students at the MOOC; or they can contribute to the knowledge base on which the Navigator will work. A handful of actually paid professors, sitting in their elite, woodgrain finished offices, will pause from their daily activities (picking up family from the airport and a cake for the birthday party tonight) to briefly consider the knowledge produced for them by the unpaid underclass and highly developed algorithmic machines, then deliver their wisdom via MOOC before they rush off to fulfill their social commitments.  

Who this vision of the future enhances and who it exploits will likely be one of the key struggles of the coming decade. It is hard to parse what people like Bill Gates think will come of the MOOC Moment, but clearly he would prefer we think of education as something that happens on the Surface of one of Microsoft’s tablets, rather than in the interactions that take place in the physical classroom. At the very least, he would prefer completely disorganized, entrepreneurial teachers whose existence is, at every moment, contingent on meeting the standards set out by one or another of the for-profit companies administering our testing regime – first at the K-12 level and now, with Pearson administering the multiple-choice tests for the Georgia Institute of Technology MOOCs, in higher ed as well.

That may indeed be the future, but, as Gerry Canavan aptly put it back in February, “MOOCs are how you’d structure higher education if you believed there were no future.” By this he means that laying off the majority of your faculty (or, as in the case of the Georgia Institute of Technology, hiring only 8 instructors to oversee 20,000 students) may be economically sustainable (if the only thing that matters is a commodity degree of questionable quality spit out at the end of the program), but it is likely not institutionally sustainable. Today we have well-trained computer science professors who can develop the program based on today’s knowledge, and certainly there are deep pools of exploitable, underemployed adjuncts you can draw upon – adjuncts who took on debt and improved themselves through higher education because, at that point, it still held out some promise of a career replacing those computer science professors at the top, both as teachers and researchers. But who will go to graduate school in the next decade if their only hope is to become one of the handful of instructors necessary to run a MOOC?

Failing to account for, and pay for, the continuation and reproduction of a necessary system isn’t economic rationality; it isn’t a hard-nosed commitment to making the tough choices; it’s the exact opposite. It’s living as if there is no future, no need to reproduce the systems we have now for the future generations who will eventually need them. The fantasy that we could MOOCify education this year to save money on professor labor next year, and gain a few black lines in the budget, ignores the obvious need for a higher educational system that will be able to update, replenish, and sustain the glorious MOOCiversity when that time inevitably comes. Who is supposed to develop all the new and updated MOOCs we’ll need in two, five, ten, twenty years, in response to events and discoveries and technologies we cannot yet imagine? Who is going to moderate the discussion forums, grade the tests, answer questions from the students? In what capacity and under what contract terms will these MOOC-updaters and MOOC-runners be employed? By whom? Where will they have received their training, and how will that training have been paid for? What is the business model for the MOOC — not this quarter, but this decade, this century?

These are the questions the smooth surface of the Knowledge Navigator cannot answer. They are questions to which we used to have answers, based on processes of production and reproduction that have worked fairly well for hundreds of years. There has undoubtedly been an overproduction of intellectuals on some level – if we look strictly at the employment numbers. But it seems a very cynical view of the future to think we won’t need highly trained people to do work we don’t yet understand. Yet, judging from the BLS projections, that is exactly what employers are predicting. This seems to point to the need for a more public employment of intellectual labor – akin to the public expansion of higher education in the mid-century. It doesn’t take a very wide environmental scan to realize that kind of project is not in the offing anytime soon.

Right now it appears we are once again putting our faith in gadgetry – in part because the sale of gadgetry is far more profitable than the minimal margins on service work like actually educating people. There are few Alan Kays in the world who can imagine the beneficial uses for technology when it is embedded in a robust social context that values education and learning. And since all of us must now ensure that our contributions to the social totality are measurable in dollars, it is likely that those who already possess that capital will be ahead of the curve.


Economic Complexity Theory

I have been meaning to finish a long post on the implications I see in the economic complexity theories coming out of MIT. The basic theory is that economies are best thought of not in terms of large aggregates like GDP, or even exports per se, but in terms of the capabilities those exports represent. These capabilities can then be thought of as the raw materials for future exports and economic activities. As a short NYTimes piece last spring described it:

Hidalgo and Hausmann think of economies as collections of “capabilities” that can be combined in different ways like an Erector set to produce different products. An Internet retailer, for instance, cannot function without some kind of electronic-payment network. It also needs a working system of postal addresses — not every country has one — as well as reliable mail. Because these capabilities cannot be easily identified and observed, Hausmann and Hidalgo track the silhouettes that the capabilities cast upon trade statistics. If a product is a significant part of a country’s exports, it offers evidence that the country has certain kinds of related capabilities.

The sticking point is what those capabilities mean. In general they seem to break it down according to a distinction relevant to the statistical physicist (Hidalgo) behind the endeavor: capabilities are related to economic development as potential energy is related to kinetic energy. If a country has a lot of capabilities and a low per capita GDP, then it is presumed to have high potential energy, waiting to explode into GDP growth. On the other hand, if a country has high capabilities and high per capita GDP, its already kinetic activity has little upward potential (outside of discovering cold fusion).
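To make the "capabilities cast silhouettes on trade statistics" idea concrete, here is a minimal sketch of the calculation their measures start from: flag which products a country exports competitively using revealed comparative advantage (RCA), then count each country's diversity and each product's ubiquity. The numbers are toy values of my own, not Hidalgo and Hausmann's data or code.

```python
# Toy export matrix: country -> {product: export value in arbitrary units}.
exports = {
    "A": {"wine": 50, "chips": 50},
    "B": {"wine": 10, "chips": 90},
    "C": {"wine": 90, "chips": 10},
}

products = sorted({p for row in exports.values() for p in row})
world_total = sum(v for row in exports.values() for v in row.values())
# Each product's share of total world exports.
world_share = {p: sum(row.get(p, 0) for row in exports.values()) / world_total
               for p in products}

def rca(country, product):
    """A country's export share of a product relative to the world's share."""
    row = exports[country]
    return (row.get(product, 0) / sum(row.values())) / world_share[product]

# A country is taken to "have" the capabilities behind a product when RCA >= 1.
M = {c: {p: rca(c, p) >= 1 for p in products} for c in exports}

diversity = {c: sum(M[c].values()) for c in exports}             # per country
ubiquity = {p: sum(M[c][p] for c in exports) for p in products}  # per product
```

In this toy matrix, country A exports both products competitively (high diversity) while B and C each specialize in one; the real method then iterates diversity and ubiquity against each other to rank countries and products by complexity.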

As with most attempts to equate economics with physics, this one has all sorts of cultural, political, and social blind spots, but it is an interesting set of abstractions that may be fun to play with. The scholars at MIT have written a book – free on their website, of course – outlining this theory. But the real fun is in playing with their data on the MIT Media Lab website.

Experiments like this seem to illustrate just how difficult it is to predict future growth.  But they might also offer some insight into how it might be generated.  It would seem the best idea would be to help people within your country develop tacit knowledge and deep competencies and capabilities.  If only we had some sort of institutions where these kinds of things could be developed – some sort of publicly supported infrastructure of higher education. Hmmmm.

Facebook: bubble, 2.0

As Facebook prepares for its IPO, Business Week points out that it has a very meagre profit profile.

There’s no question Facebook is huge—possibly the largest digital-only social enterprise that has ever existed—and it’s still growing at a fairly rapid rate. Just a few months ago it crossed the 800-million-user mark, and it has now passed 900 million, which suggests it will probably rack up a billion active users sometime later this year. And more than half of that user base visits the site at least once a day, a level of engagement other Web services would likely kill for. [. . . .] Facebook’s problem is somewhat different: It has nearly a billion active users, but it makes a remarkably tiny amount from each one—about $5 per year. That’s not a lot, considering over half of those users visit every day. And while the amount Facebook makes from the average user rose in the most recent quarter, it grew just 6 percent. Some of the marketing costs the company is racking up are no doubt increasing that number. But how much more can Facebook squeeze out of its existing user base?

Much of the focus on the company is on its incredible number of dedicated users – which should be Web 2.0 manna. But unless the goal is to have all 6-7 billion people on the planet on Facebook, there is only so much further they can grow in that direction.  The question is how much of that dedicated user base is due to the relatively uncommercial skin of the platform?  As they try to monetize it with more ads or deeper data collection and sharing, will people run away?
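The ceiling is simple arithmetic. A quick sketch, using the figures from the Business Week quote above (the saturation scenario is my own illustration):

```python
# Facebook's growth problem in a few lines of arithmetic: with average
# revenue per user (ARPU) stuck around $5/year, user growth alone has a
# hard, calculable ceiling.

arpu = 5.0              # average revenue per user per year, USD (from the quote)
users_now = 900e6       # active users at the time of writing
users_ceiling = 7e9     # roughly everyone on the planet

revenue_now = arpu * users_now          # ~4.5 billion USD/year today
revenue_ceiling = arpu * users_ceiling  # ~35 billion USD/year at total saturation
growth_left = revenue_ceiling / revenue_now  # under 8x, from user growth alone
```

Anything beyond that multiple has to come from raising ARPU, which is exactly the monetization-versus-user-flight tension described above.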

This is a very specific question about Facebook, but it speaks to the general anxiety of our age: as culture can be more freely distributed and shared – arts, music, movies, education, and news – how do we develop the structures to support those activities in a sustainable way?  In this sense, there is a very fine line between micro-level hypercapitalism and no capitalism at all, at least in terms of supporting cultural production through its commodification.  We may soon have to answer some difficult questions about what kind of work we would like to support and foster as a society – and how.

Meanwhile, please share this post on Facebook.  We don’t get any money from the site, but it is always gratifying to see those pageview stats.

Big-time futuring

What will things look like 100,000 years from now? New Scientist suggests we imagine boldly, and with some optimism.

Alas, the articles are paywalled.  But it’s an exciting prompt to use – imagine what education would be like around 102,012 CE.

(via the LinkedIn Forecasting group)

Forecasting news across Twitter

A new research project tracks the spread of news stories across Twitter, claiming to predict how this will occur.  Fascinating idea.

The method involves analyzing a story’s content and context:

  • The news source that generates and posts the article (derived over time; a mix of traditional journalism and blogs, apparently)
  • The category of news this article falls under (derived from Feedzilla)
  • The subjectivity of the language in the article (interesting tools for this)
  • Named entities mentioned in the article (a known place, person, or organization)
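A hedged sketch of what extracting those four predictor families might look like in code. The field names, the toy subjectivity word list, and the sample article are my own illustration, not the researchers' actual pipeline:

```python
# Turn an article record into the four predictor families listed above.
# In the real study, subjectivity came from dedicated language tools and
# entities presumably from a named-entity recognizer; this toy version just
# shows the shape of the resulting feature vector.

def extract_features(article):
    subjective_words = {"amazing", "terrible", "best", "worst", "shocking"}
    words = article["text"].lower().split()
    return {
        "source": article["source"],        # who published the article
        "category": article["category"],    # e.g. a Feedzilla-style category
        # Fraction of words flagged as subjective (toy scoring).
        "subjectivity": sum(w in subjective_words for w in words) / max(len(words), 1),
        "num_entities": len(article["entities"]),  # named people/places/orgs
    }

article = {
    "source": "example-news.com",  # hypothetical outlet
    "category": "Technology",
    "text": "Shocking new gadget is the best thing ever",
    "entities": ["Example Corp"],
}
features = extract_features(article)
```

A vector like this would then be fed to whatever classifier predicts spread; as the finding below notes, the source field turned out to carry most of the signal.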

The category breakdown interested me.  From an SEO-ish perspective, what types of stories circulate best in the Twittersphere?

Interestingly, Huberman et al. found that these categories weren't actually that useful. The biggest predictor?

Overall, we discovered that one of the most important predictors of popularity was the source of the article. This is in agreement with the intuition that readers are likely to be influenced by the news source that disseminates the article.

Consider it a sign that information fluency is rising.

(thanks to Mike Carpenter)

Futuring from the Ladies’ Home Journal

Some fun futuring from the past:

(thanks to Jesse Walker)

Futuring as better than curing

Another way of thinking about futures work: it’s a preventative health measure.  Mental health, in a way, or organizational:

In my view, futurism (“strategic foresight,” “scenario planning”) is a vaccination for our civilization’s immune system. It strengthens us. By introducing us to different possible futures, we become sensitive to those potential outcomes, and able to recognize their early signs. We can think about how we would respond to different futures, and argue about what would be desirable *before* it happens…

Human civilization has a weak immune system when it comes to futures. We can sometimes recognize when something big is imminent, and act. We rely on clumsy, inefficient tools like finance, religion, even “look before you leap” to make us look forward and consider our choices. So more often than not, we’re taken by surprise, shocked when something big happens “out of the blue.” We haven’t prepared for big changes. Our immune system needs to be strengthened. But how do we do something like that? (I suspect you know the answer.)

Launching this blog

This blog is a place for Sean and Bryan to do NITLE futures work.

We’ll put up a real About page shortly.  Design and content are a-comin’.