Writing and reading content c. 2050

Most authoring done on personal computers today is just automated paper document creation or automated letters for an automated post office. Worse, the advent of WWW browsers for the Internet disastrously turned users into simple consumers, because the browsers do not permit any kind of balanced authoring for Internet content. That this was allowed to happen is almost beyond belief and has been a terrible setback. – Alan Kay

What follows is a longish riff on imaginary authoring systems. Here’s the agenda: Could the steady shift from static text to dynamic visualization now emerging in computer programming, business intelligence search results, product design, and business process representations eventually impact the document authoring process? Should we start envisioning the ancient labor of writing as something that can be automated through diagrams and pictures? And could we later make it available to readers as pictures or movie narratives rather than text?

Consider this: instead of writing a document section by linear section, how about designing a document as a ragbag of ideas, examples, references, and stories that can be automatically shaped and expressed by software, drawing on content databases and programming routines?

You could select from a menu of objectives, target audiences, and so on (we’ll need a very powerful taxonomy of document types, functions, approaches, dimensions, etc.), and the machinery would do all the work. Stylometric tracking of your existing authored documents could produce a style profile that would be plugged in to personalize the output. But the ultimate purpose of authoring systems would be to craft content for readers, not to ‘express’ the writer. Le style, c’est l’homme-qui-lit même… (style is the man-who-reads himself).
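To make the menu-of-objectives idea a little more concrete, here is a minimal sketch of what such a specification-driven generator might look like. Everything here is hypothetical: the content database, the tags, and the `DocumentSpec` fields are illustrative inventions, not a real system.

```python
from dataclasses import dataclass, field

# Hypothetical content database: fragments tagged by topic and audience.
CONTENT_DB = [
    {"text": "Plasma confinement remains the central challenge.",
     "topic": "physics", "audience": "expert"},
    {"text": "Think of plasma as a soup of charged particles.",
     "topic": "physics", "audience": "general"},
    {"text": "Tokamak budgets doubled over the decade.",
     "topic": "physics", "audience": "banker"},
]

@dataclass
class DocumentSpec:
    """The 'menu' the user fills in: purpose, audience, style profile."""
    topic: str
    audience: str
    style: dict = field(default_factory=dict)  # e.g. a stylometric profile

def generate(spec: DocumentSpec) -> str:
    """Assemble a 'document' by filtering the content database on the spec."""
    fragments = [f["text"] for f in CONTENT_DB
                 if f["topic"] == spec.topic and f["audience"] == spec.audience]
    return " ".join(fragments)

doc = generate(DocumentSpec(topic="physics", audience="general"))
```

A real system would of course do far more than filter and concatenate; the point of the sketch is only that a document becomes the output of a query over a content resource, shaped by an explicit audience parameter.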

This might suggest ‘Raymond Chandler does plasma physics’ or a legal tome à la Salman Rushdie, but the idea would be to explore how far personalization can be taken beyond its current rather banal horizon. Readerly authoring like this would truly herald that death-of-the-author meme, once identified by Barthes and explained by Foucault!

Next logical step: instead of just reading a document in the usual old way, why not automatically translate it into a series of visual representations (a sort of Flash implementation) that do such things as highlight arguments, capture inconsistencies, pick out story-lines, unveil subtexts, identify allusions, summarize, offer counter-evidence or arguments, and so on?

Rather as we entertain the notion of textual content as a sort of virtual microworld populated with concepts and arguments and stories that tug at our hearts and minds, let’s get the machines to automatically adapt this content into a physical movie, slide show, or flow chart, or indeed invent some new medium for the dynamic expression or representation of ideational content that simply does the job even better than scanning print with a pencil in one’s hand.

It is not hard to recognize the kind of documents that would lend themselves to this type of translation, starting with instructional texts. The history of culture is already packed with examples of ‘media translations’ from stories and jokes to plays to films to operas to comic books to multimedia extravaganzas to radio dramas to bowdlerized editions or signed versions. Think of the destiny, in English, of a piece of print content such as “Alice in Wonderland”. But in a print culture, documents were scarce and were therefore designed to last. In a digital world, this is no longer necessary. We can now turn this extraordinary capacity for media metamorphosis into a natural asset for content holders. This rich personalization of textual content (going way beyond mainstream genre or media metamorphosis) would therefore be one vital area to explore.

In other words, let’s start thinking of future authors’ words as (among other things) instructions to visualize (not imagine) content externally in the real world (i.e. not in our heads). And at the same time, let’s try and think of existing authored products as instructions to a machine to make visual or multimedia displays of both conceptual and intellectual content (“It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife”), as well as more obvious descriptions or activities identified in texts by words (“The Marquis went out at five o’clock” or “the planet Mars”).

These automated ‘authoring’ and ‘reading’ (for lack of better terms) models would in the end converge into a new vision of content. Worthwhile documents get generated automatically from a resource by specifying a purpose (e.g. write a report summarizing activities in the fishing boat construction market in Norway between 200X and 200Y) and an audience (French bankers), and the software does the rest. The report’s “text” (as we would still call it today) could then be fed into a display model, offering a number of visualizations, including good old black print on white paper, but extending in various ways into dynamic representations that draw on real-time content updates, group contributions, etc.

None of these manifestations of a document needs a long shelf life. Indeed, ‘documents’ would become but fleeting coagulations of content in the constant flow of data, rather like the motile thoughts that stream through our consciousness. Actionable content alone would congeal into sets of instructions to act (strategies, tactics, plans, etc.).

To convince you that all this is not completely off the wall, here are a few leads on concepts at work in the current content visualization space (see my earlier post on this topic). You are probably familiar with visualization desktop applications that can take a boring table of numerical data and turn it into colorful pie-charts and other graphics. Well, this track of presenting data as diagrams and other visual models has been pursued much further by firms such as Anacubis, which helps analysts ‘discover’ knowledge by presenting reams of content as visually engaging diagrams that highlight links and associations that get lost in the gray blur of print. Another innovator here is i2’s investigative analysis software, which will automatically translate complex timeline data into visual maps that help analysts compare the causes and effects of incidents, for example. In a similar way, some search engines will now offer pictures of results, representing the usual list of URLs as trees with branches, or star-shaped clusters of links.
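The humblest version of that first step, turning a table of numbers into something the eye can grasp at once, can be sketched in a few lines. This is a toy text-mode chart, not any vendor's product; the sample data is invented.

```python
def bar_chart(data: dict, width: int = 20) -> str:
    """Render a table of label -> number as a crude text bar chart.

    The longest bar is scaled to `width` characters; the rest are
    proportional, so relative magnitudes jump out immediately.
    """
    peak = max(data.values())
    lines = []
    for label, value in data.items():
        bar = "#" * round(value / peak * width)
        lines.append(f"{label:<8}{bar} {value}")
    return "\n".join(lines)

# Hypothetical quarterly figures -- the 'boring table of numerical data'.
sales = {"Q1": 40, "Q2": 100, "Q3": 70}
print(bar_chart(sales))
```

Even at this trivial scale, the point of the essay holds: the same underlying numbers support many renderings, and the reader-facing form is a choice made after the content exists.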

This type of visualization is fine for inspecting an information source domain, but it is hard to know whether you could use it to picture the meaning (or implications) of a single document. Most of us still prefer the good old Table of Contents or the index to get an intellectual handle on texts. For example, anyone who has translated a large factual book will tell you that the best place to start is the index: translate the indexed terms and you have a powerful cognitive picture of what’s in the body of the book, plus a handy terminology base. My question here would be: are there any ways in which technology can help us ‘learn’ what is in a book (maybe I mean ‘read’ it) by having the book content engineered into a graphical and above all dynamic mode?
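The index-first trick translators use can be caricatured computationally: ranking content-word frequencies gives a very crude approximation of a back-of-book index, and hence a first 'cognitive picture' of a text. This is a toy heuristic of my own, not a real indexing or term-extraction engine, and the stopword list is deliberately minimal.

```python
import re
from collections import Counter

# A tiny illustrative stopword list; a real system would use a proper one.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "that", "it"}

def crude_index(text: str, top_n: int = 3) -> list:
    """Approximate a back-of-book index by ranking content-word frequency."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

sample = ("Plasma physics studies plasma behaviour. "
          "Plasma confinement is the hardest physics problem.")
```

Running `crude_index(sample, 2)` on this invented snippet surfaces `plasma` and `physics` as the dominant terms, which is exactly the kind of at-a-glance handle on a text the paragraph above is asking for, before any graphical or dynamic rendering is even attempted.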

Another visualization track that gets us a bit closer to our initial visual authoring idea covers new products coming on the market designed to help word-centric people put their ideas into visual form to drive a product design process.

N8 Systems lets business analysts describe the steps of a business process in words and then automatically converts the language into diagrams.

The N8 text modeling tool then transforms the written requirements into use case and activity diagrams. The analyst is able to see immediately where the inconsistencies in the definition of a process or workflow occur, and can then make rapid iterations to achieve and articulate the desired outcome. The analyst can then share the process diagrams with multiple constituents in the business unit to check that the definitions are accurate, and can quickly modify them as needed.

Once satisfied with the results, the business analyst provides the diagrams to the system architect, consultant, or other solution provider to help jump-start the requirements process. Communication is enhanced as both parties share precise, accurate diagrams of the written requirements.
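The general idea behind such text-to-diagram tools can be caricatured in a few lines. The following toy translator is my own invention for illustration, not N8's actual technology: it turns a 'then'-chained process sentence into a Graphviz DOT digraph, the standard plain-text diagram format that graph-drawing tools can render.

```python
def process_to_dot(description: str) -> str:
    """Turn a 'step, then step, then step' sentence into a DOT digraph.

    Each clause becomes a node; consecutive clauses become directed edges,
    giving a minimal visual model of the written process description.
    """
    steps = [s.strip(" .,") for s in description.split("then")]
    edges = [f'  "{a}" -> "{b}";' for a, b in zip(steps, steps[1:])]
    return "digraph process {\n" + "\n".join(edges) + "\n}"

print(process_to_dot("Receive order, then check stock, then ship goods."))
```

Feeding the output to Graphviz's `dot` would draw the three-step flow; the interesting part, as with N8, is that the diagram is derived mechanically from the prose, so edits to the words can regenerate the picture.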

Stottler Henke’s SimBionic is a visual authoring tool and runtime engine for creating complex behaviors found in games and instructional media.

It uses a graphical interface to specify behaviors, so that non-programmer ‘subject matter experts’ can create them. This reduces the risk of simulation errors related to miscommunication of content between an expert and a programmer.

3D software

Perhaps the best example of the power of visualization to boost authoring is found in the emerging field of Product Lifecycle Management (PLM), where the French company Dassault Systèmes (DS) is leading the pack with the concept of 3D for All.

I happen to know a little about DS because I once did some writing work for them. To simplify drastically, PLM software uses 3D functionality to model, design, engineer, and test not just a potentially manufacturable product such as a mobile phone or an airplane door, but every other aspect of that product’s lifetime, from designing the appropriate manufacturing process to the kind of factory you’d need to house it in. In other words, rather as I was imagining for texts, products become huge databases of design information that can then interact with any number of other digital tools. The key advantage in PLM is that you can work together as a team to design a new product and then test it virtually in 3D, instead of wasting good money on building a whole airplane in the real world to see what happens in the wind tunnel.

But what interested me most at DS was the disruptive fact that the lowly process I was involved in used none of this superb 3D software to enhance the production of a complex document. As in most text authoring situations, I would guess, it was all faxes, text files, and PDFs of the graphic design, endless rewritings, and no real integration between designer, graphics/colors/layout, and the words. Although I notice that DS has just signed a wide-ranging alliance with Microsoft to extend PLM software to small enterprise users of the Microsoft .NET platform, I doubt that document production will enter the PLM engineering mindset any time soon. I would, however, bet that games, advanced toys, and possibly business process design will be the first future targets for 3D – and other design – software.

Andrew Joscelyne
European, a language technology industry watcher since Electric Word was first published, sometime journalist, consultant, market analyst and animateur of projects. Interested in technologies for augmenting human intellectual endeavour, multilingual messaging, the history of language machines, the future of translation, and the life of the digital mindset.
