Software News
This section collects industry news worth highlighting in order to understand the current and projected landscape of the software industry in Argentina and worldwide.
The Science of Pricing Software
Setting prices is not an exact science, but it is not magic either: it is shaped by how your software is perceived, by market conditions, and by its value. So what is the process for finding the winning price?
Software marketing
The blog features posts on the marketing of software products and services.
Friday, May 24, 2013
On ideas without markets
16:28
Juan MC Larrosa
Demand, supply and intermediaries: unhelpful labels
Demand, supply and the market
We know that the separation between the demand and supply of research is artificial: Ideas emerge and are used in complex systems in which players interact with each other and often perform several roles at once.
The labels, demand and supply, come from the metaphor of the marketplace of ideas which was introduced to the world of policymaking and think tanks in the 1970s in the United States. Back then, words that had been used in business and marketing broke into the world of public policy and never left. But recently, many research funders and several researchers and practitioners in the field of ‘bridging research and policy’ have adopted the metaphor, only this time in a literal manner.
K* jargon
This is a mistake. It is one thing to say that there is something like a marketplace of ideas in which ideas are exchanged; it is quite another to attempt to structure the research-policy space as if it were an actual market. It simply is not one.
We also know that the policy process involves complex relationships between people, organisations and institutions that are sustained by different degrees and types of communication between parties.
In other words, everyone is an intermediary to someone else. But this intermediation is different from what the proponents of knowledge brokering, knowledge intermediation and the like (what has been called K*) argue for. It corresponds instead to Robert Hoppe’s concept of boundary workers (or the broader concept of boundary organisations). Boundary workers:
Unlike the intermediary that sits ‘in-between’ two or more separate players or communities, a boundary worker (or an organisation homo mediaticus) must abide by and is accountable to the rules of the communities it seeks to bring together. In other words, and in the particular cases that this blog deals with, a boundary worker would be part of both the research and the policymaking communities. And its success as a boundary worker is greatly dependent on its ability to:
- Be an active and respected member of the various communities that it seeks to bring together; and
- Add value to that interaction by undertaking research, analysis, and/or reflection, and/or the application of ideas into practical actions.
It is not, therefore, just a matter of being a specialist in intermediation (whatever that means). An effective boundary worker is competent in the trades of the communities it brings together and adds value to the interaction through its own interventions. And it is by combining both memberships that things come together. Think tanks can be seen as boundary workers between academia and policymaking (and the media, political parties, corporations, NGOs, etc., depending on the focus and scope of their work). But research centres in universities and policy analysis units in ministries could just as well play that role between a number of other actors. The media, too, can present this quality.
We ought to be more nuanced in the way that we try to study and understand the research and policy communities, and their overlaps, across developing countries and in each case in particular. Making broad generalisations is unhelpful, as is relying on labels. It is a shame, too: the apparent messiness of the system, and the effort to make sense of it, is much more interesting.
Politics and Ideas
Wednesday, May 22, 2013
Economic value: how much is a brand worth?
13:58
Juan MC Larrosa
Brand value
BRANDS are basically a promise. They tell consumers what quality to expect from a product and show off its personality. Firms invest a lot in the image of their brands to foster sales and loyalty. But measuring their value is hard. Millward Brown, a market-research company, is one of several that take a stab at it. It has just published its annual ranking of the world's "most powerful" brands based on consumers' perceptions and the performance of the companies that own them.
The top 100 are collectively worth $2.6 trillion, the firm reckons. Apple remains the world's most valuable brand, worth $185 billion, at the head of a trio of technology companies. None has increased much in value, however, since 2012, perhaps because they have been refining their products rather than being startlingly innovative. Microsoft, which tried to be startling by launching a radical new operating system, has seen its brand value fall. Apple's big rival, Samsung, jumped 25 places, partly by out-innovating Apple and partly by boosting its advertising expenditure by $1.6 billion.
Visa was one of the main brand sponsors for the 2012 Olympic Games in London. But many of the big gainers profited from growth in emerging markets. That helps explain the jump in the value of beer brands like Brazil's Brahma, which is worth 61% more than last year. Tencent, an internet services portal, benefited from being innovative and Chinese. As sales slowed in Europe, Zara, a high-street fashion retailer, launched online shopping for customers in China.
Luxury goods companies groom their brands even more carefully than most. Gucci, whose brand value increased by almost 50%, has invested in technology to support its online and mobile presence. The biggest riser this year, though, is Prada, whose brand value surged 63% as it boosted sales in both old markets and new. But even in Western Europe its most avid customers were Asian tourists.
Honoring the creator of the GIF
13:49
Juan MC Larrosa
An Honor for the Creator of the GIF
By AMY O'LEARY
Among the thousands of file formats that exist in modern computing, the GIF, or Graphics Interchange Format, has attained celebrity status in a sea of lesser-known BMPs, RIPs, FIGs and MIFFs. It was honored as a “word of the year” in 2012, and Tuesday night, its inventor, Steve Wilhite, will be accepting a lifetime achievement award at The Webby Awards.
Now, almost any fragment of digital culture can be spun up into a grainy, gratifying animation. GIFs provide a platform for nearly everything, it seems — from rapid-fire political commentary to digital art to small moments of celebrity intrigue.
Has any file format received more attention, more accolades (or had more fun) than the GIF?
Invented in 1987, the GIF has today become the aesthetic calling card of modern Internet culture. Even Yahoo released one this week to announce the company’s acquisition of Tumblr.
“It’s been an incredibly enduring piece of technology,” said David-Michel Davies, the executive director of The Webby Awards. “Even as bandwidth has expanded,” he said, “it has been very exciting to see how much cultural cachet the format has gotten.”
But back in 1987, such things could not be imagined. Dial-up speeds were achingly slow. Image downloads were made even worse by interoperability problems. An article that year in the magazine “Online Today” described the problem:
“Horror stories about incompatible microcomputers may be humorous when everyone is in a good mood, but they are certainly the nemesis of any serious computer user. The frustration is no laughing matter when a person wants to transfer some data or a graphics image, and the system doesn’t cooperate.”
Mr. Wilhite, then working at CompuServe (the nation’s first major online service), knew the company wanted to display things like color weather maps. Because he had an interest in compression technologies, Mr. Wilhite thought he could help.
“I saw the format I wanted in my head and then I started programming,” he said in an e-mail. (He primarily uses e-mail to communicate now, after suffering a stroke in 2000.) The first image he created was a picture of an airplane.
The prototype took about a month and the format was released in June 1987.
“I remember when other people saw the GIF,” he said. Colleagues abandoned work on other black and white formats, he said, as graphics experts began to spread the GIF online. A triumph of speed and compression, the GIF was able to move as fast as Internet culture itself, and has today become the ultimate meme-maker.
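The compression that made the GIF practical over dial-up is the dictionary-based LZW scheme. As a rough illustration of the idea only (a simplified sketch of dictionary coding added here, not the actual GIF bit-level encoder and not part of the Times article), the following Python snippet learns repeated byte sequences and emits one code per longest match:

def lzw_encode(data: bytes) -> list:
    """Toy dictionary (LZW-style) encoder: emits one code per longest
    already-seen byte sequence, learning new sequences as it goes."""
    table = {bytes([i]): i for i in range(256)}  # start with single bytes
    next_code = 256
    codes = []
    current = b""
    for value in data:
        candidate = current + bytes([value])
        if candidate in table:
            current = candidate              # keep extending the match
        else:
            codes.append(table[current])     # emit code for longest match
            table[candidate] = next_code     # learn the new sequence
            next_code += 1
            current = bytes([value])
    if current:
        codes.append(table[current])
    return codes

# Flat, repetitive data (like the solid color runs in early GIF images)
# collapses into far fewer codes than input bytes.
print(len(lzw_encode(b"ABABABABABABABABABABABABABABAB")))  # 10 codes for 30 bytes

Real GIF encoders apply the same dictionary idea to indexed pixel data with variable-width codes; this sketch only shows why repetitive images shrink so well.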
In the last decade, the animated GIF has reigned supreme, and while Mr. Wilhite has never himself made an animated GIF, he said the classic “dancing baby” from 1996 remains a favorite.
Since retiring in 2001, Mr. Wilhite has led a quieter existence than his creation. He goes on RV trips. He built a house in the country with a lot of lawn to mow. He dabbles in color photography and Java programming. He uses e-mail and Facebook to keep up with family.
He is proud of the GIF, but remains annoyed that there is still any debate over the pronunciation of the format.
“The Oxford English Dictionary accepts both pronunciations,” Mr. Wilhite said. “They are wrong. It is a soft ‘G,’ pronounced ‘jif.’ End of story.”
The webcast of Mr. Wilhite’s Webby Award acceptance speech will be on YouTube on Wednesday.
Bits Blog
Thursday, May 16, 2013
High but poor-quality connectivity in Argentina
7:01
Juan MC Larrosa
Internet: Argentines are hyperconnected, but with poor quality and high prices
With 75% of the population online, the country leads Latin America in number of users, but it keeps falling behind in connection speed
Some 30 million Argentines (75% of the population) have an Internet connection. On average, they spend 26.3 hours a month in front of a screen, most of that time at home, but also at work and, when there is a 3G signal, on the street or on public transport.
Within Latin America, Argentina tops the ranking in service penetration but is slipping toward the bottom in connection quality: the average speed promised by providers is 7 megabits in the metropolitan area, according to a study by Universidad de San Andrés (Udesa). That speed drops substantially in the provinces, opening a social gap that programs such as Conectar Igualdad, which distributes notebooks and connectivity to schools, have not yet shown they can close. Meanwhile, not a single kilometer of the state fiber-optic rollout has entered service.
In the capital, some providers offer higher residential speeds, such as the 100 megabits promoted by Telecentro. In that segment, Cablevisión plans to raise its mass-market residential speed this year from the current 6 megabits to between 10 and 15 megabits; its fastest residential product today runs at 30 megabits. Far from those bandwidths at mass scale, the telephone companies are announcing heavy investments to upgrade networks that, across most of their coverage, cannot deliver speeds above 3 megabits.
Although subscription prices have lagged inflation (they rose only about 10% in a year), they remain among the most expensive in the region. "Among those who do not have Internet access in Argentina, 56% say they cannot afford it," said Hernán Galperín, director of the Centro de Tecnología y Sociedad at Udesa.
Tomorrow, when Internet Day is celebrated, this is the picture the network will find in Argentina, where debates that are heating up around the world are still barely heard locally: Who owns the network? What is the role of the bodies that manage it? Should the state regulate it?
Last year Argentina reached 6.3 million broadband connections, fixed and mobile combined, and according to a Cisco projection the figure will reach 9.7 million by 2016. In the metropolitan area, according to a recent study by Universidad Argentina de la Empresa (UADE), 32% of people are constantly connected, 21% connect once a day and another 21% say they never connect. The same study found that one in ten people reports having found a partner through the Web.
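As a back-of-the-envelope check on that projection (an illustration added here, not part of the article), the implied compound annual growth rate from 6.3 million connections to a projected 9.7 million works out to roughly 11% per year:

# Implied compound annual growth rate behind the Cisco projection,
# assuming 6.3 million connections in 2012 and 9.7 million in 2016
# (four years of compounding).
base, target, years = 6.3, 9.7, 4
cagr = (target / base) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year")  # about 11.4% per year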
According to Gonzalo Hita, chief commercial officer of Cablevisión, "bandwidth consumption per customer grows 50% year over year," driven, among other things, by the growing number of devices and the rising consumption of video. Even so, basic services, e-mail among them, remain the most common online activity, at 89%. Close behind comes the great Argentine digital passion: 81% of local Internet users are on social networks, with Facebook leading by a wide margin at 12.8 million unique monthly users over the age of 15, according to the latest comScore measurement. Contrary to what one might assume, the second most popular option is not Twitter (3.3 million users) but the professional network LinkedIn, with 3.4 million. Last year Argentina climbed to second place among the countries that spend the most time on social networks: with 10.4 hours a month it was narrowly beaten by Russia, at 10.7 hours, and both sit well above Thailand (8.8 hours) and Turkey (8.3 hours).
According to the UADE survey, which covered 1,200 people over the age of 18, 52% of respondents do not believe that what they post on social networks could harm their own safety or that of their family. Only 33% admitted to that fear, and 15% said they did not know.
"Although penetration is high, there is a fairly pronounced nationwide gap when you compare the number of broadband connections in the capital and greater Buenos Aires with those in the rest of the provinces. To end this inequity and move toward truly federal broadband, not only in number of connections but also in speed and quality, so that there are no first- and second-class users, our chamber is working to promote the creation of regional Internet exchange points," said Ariel Graizer, president of the Cámara Argentina de Internet (Cabase), which has been installing local data-exchange centers that lower costs and improve quality in the provinces.
A large share of the traffic generated by social networks comes from mobile phones. "Almost 40% of handsets, about 20 million devices, are already accessing the Internet either through data plans or over Wi-Fi networks," said Alejandro Prince of Prince Consulting. Although the telecom companies are moderately optimistic, mobile Web connectivity is unlikely to change until the national government does something with the available, unused 3G spectrum, now in the hands of the state-owned Arsat, and above all until it gives some signal about what it will do with the spectrum needed for LTE.
"There is no doubt that people have a fundamental need. We see it in the accelerated growth of smartphones, and also among prepaid customers in lower-income segments who look for handsets with Internet access so they can use social networks, chat, or check a map to see where they are going," explained Fernando del Río, chief commercial officer of Claro Argentina.
Spokespeople for Telecom Personal agreed that "mobile Internet connections have grown exponentially in recent years thanks to the expansion of 3G networks and the plans and services that bundle data."
Friday, May 10, 2013
The digital divide has not reached Antarctica
7:01
Juan MC Larrosa
Antarctica surpasses Cuba and Iraq in Internet hosts
The site news.techeye.net reports that Antarctica, the continent that contains the South Pole, has overtaken Cuba, Jamaica and Iraq in number of Internet hosts.
In computing, the term Internet host describes a computer connected to a network. A host has its own Internet address, or IP address.
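To make that definition concrete (a minimal illustration added here, not part of the original note), Python's standard socket module can resolve a host name to the IP address under which that host is reached; the host name below is just an example:

import socket

# A host is simply a machine reachable on the network under its own IP address.
hostname = "example.com"  # any reachable host name would do
print(hostname, "resolves to", socket.gethostbyname(hostname))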
The host table is led by the United States with 505 million, followed by Japan, Brazil, Italy and China.
Analyst Mike Magee notes that Poland, Argentina and Canada rank ahead of the United Kingdom (8.107 million), while India, with a population of 1.2 billion, has 6,746,000 Internet hosts. China, with a similar population, has 20,602,000.
martinoticias
Monday, May 6, 2013
Paper and digital screens: not quite perfect substitutes
10:38
Juan MC Larrosa
The Reading Brain in the Digital Age: The Science of Paper versus Screens
E-readers and tablets are becoming more popular as such technologies improve, but research suggests that reading on paper still boasts unique advantages
By Ferris Jabr
In a viral YouTube video from October 2011 a one-year-old girl sweeps her fingers across an iPad's touchscreen, shuffling groups of icons. In the following scenes she appears to pinch, swipe and prod the pages of paper magazines as though they too were screens. When nothing happens, she pushes against her leg, confirming that her finger works just fine—or so a title card would have us believe.
The girl's father, Jean-Louis Constanza, presents "A Magazine Is an iPad That Does Not Work" as naturalistic observation—a Jane Goodall among the chimps moment—that reveals a generational transition. "Technology codes our minds," he writes in the video's description. "Magazines are now useless and impossible to understand, for digital natives"—that is, for people who have been interacting with digital technologies from a very early age.
Perhaps his daughter really did expect the paper magazines to respond the same way an iPad would. Or maybe she had no expectations at all—maybe she just wanted to touch the magazines. Babies touch everything. Young children who have never seen a tablet like the iPad or an e-reader like the Kindle will still reach out and run their fingers across the pages of a paper book; they will jab at an illustration they like; heck, they will even taste the corner of a book. Today's so-called digital natives still interact with a mix of paper magazines and books, as well as tablets, smartphones and e-readers; using one kind of technology does not preclude them from understanding another.
Nevertheless, the video brings into focus an important question: How exactly does the technology we use to read change the way we read? How reading on screens differs from reading on paper is relevant not just to the youngest among us, but to just about everyone who reads—to anyone who routinely switches between working long hours in front of a computer at the office and leisurely reading paper magazines and books at home; to people who have embraced e-readers for their convenience and portability, but admit that for some reason they still prefer reading on paper; and to those who have already vowed to forgo tree pulp entirely. As digital texts and technologies become more prevalent, we gain new and more mobile ways of reading—but are we still reading as attentively and thoroughly? How do our brains respond differently to onscreen text than to words on paper? Should we be worried about dividing our attention between pixels and ink or is the validity of such concerns paper-thin?
Since at least the 1980s researchers in many different fields—including psychology, computer engineering, and library and information science—have investigated such questions in more than one hundred published studies. The matter is by no means settled. Before 1992 most studies concluded that people read slower, less accurately and less comprehensively on screens than on paper. Studies published since the early 1990s, however, have produced more inconsistent results: a slight majority has confirmed earlier conclusions, but almost as many have found few significant differences in reading speed or comprehension between paper and screens. And recent surveys suggest that although most people still prefer paper—especially when reading intensively—attitudes are changing as tablets and e-reading technology improve and reading digital books for facts and fun becomes more common. In the U.S., e-books currently make up between 15 and 20 percent of all trade book sales.
Even so, evidence from laboratory experiments, polls and consumer reports indicates that modern screens and e-readers fail to adequately recreate certain tactile experiences of reading on paper that many people miss and, more importantly, prevent people from navigating long texts in an intuitive and satisfying way. In turn, such navigational difficulties may subtly inhibit reading comprehension. Compared with paper, screens may also drain more of our mental resources while we are reading and make it a little harder to remember what we read when we are done. A parallel line of research focuses on people's attitudes toward different kinds of media. Whether they realize it or not, many people approach computers and tablets with a state of mind less conducive to learning than the one they bring to paper.
"There is physicality in reading," says developmental psychologist and cognitive scientist Maryanne Wolf of Tufts University, "maybe even more than we want to think about as we lurch into digital reading—as we move forward perhaps with too little reflection. I would like to preserve the absolute best of older forms, but know when to use the new."
Navigating textual landscapes
Understanding how reading on paper is different from reading on screens requires some explanation of how the brain interprets written language. We often think of reading as a cerebral activity concerned with the abstract—with thoughts and ideas, tone and themes, metaphors and motifs. As far as our brains are concerned, however, text is a tangible part of the physical world we inhabit. In fact, the brain essentially regards letters as physical objects because it does not really have another way of understanding them. As Wolf explains in her book Proust and the Squid, we are not born with brain circuits dedicated to reading. After all, we did not invent writing until relatively recently in our evolutionary history, around the fourth millennium B.C. So the human brain improvises a brand-new circuit for reading by weaving together various regions of neural tissue devoted to other abilities, such as spoken language, motor coordination and vision.
Some of these repurposed brain regions are specialized for object recognition—they are networks of neurons that help us instantly distinguish an apple from an orange, for example, yet classify both as fruit. Just as we learn that certain features—roundness, a twiggy stem, smooth skin—characterize an apple, we learn to recognize each letter by its particular arrangement of lines, curves and hollow spaces. Some of the earliest forms of writing, such as Sumerian cuneiform, began as characters shaped like the objects they represented—a person's head, an ear of barley, a fish. Some researchers see traces of these origins in modern alphabets: C as crescent moon, S as snake. Especially intricate characters—such as Chinese hanzi and Japanese kanji—activate motor regions in the brain involved in forming those characters on paper: The brain literally goes through the motions of writing when reading, even if the hands are empty. Researchers recently discovered that the same thing happens in a milder way when some people read cursive.
Beyond treating individual letters as physical objects, the human brain may also perceive a text in its entirety as a kind of physical landscape. When we read, we construct a mental representation of the text in which meaning is anchored to structure. The exact nature of such representations remains unclear, but they are likely similar to the mental maps we create of terrain—such as mountains and trails—and of man-made physical spaces, such as apartments and offices. Both anecdotally and in published studies, people report that when trying to locate a particular piece of written information they often remember where in the text it appeared. We might recall that we passed the red farmhouse near the start of the trail before we started climbing uphill through the forest; in a similar way, we remember that we read about Mr. Darcy rebuffing Elizabeth Bennett on the bottom of the left-hand page in one of the earlier chapters.
In most cases, paper books have more obvious topography than onscreen text. An open paperback presents a reader with two clearly defined domains—the left and right pages—and a total of eight corners with which to orient oneself. A reader can focus on a single page of a paper book without losing sight of the whole text: one can see where the book begins and ends and where one page is in relation to those borders. One can even feel the thickness of the pages read in one hand and pages to be read in the other. Turning the pages of a paper book is like leaving one footprint after another on the trail—there's a rhythm to it and a visible record of how far one has traveled. All these features not only make text in a paper book easily navigable, they also make it easier to form a coherent mental map of the text.
In contrast, most screens, e-readers, smartphones and tablets interfere with intuitive navigation of a text and inhibit people from mapping the journey in their minds. A reader of digital text might scroll through a seamless stream of words, tap forward one page at a time or use the search function to immediately locate a particular phrase—but it is difficult to see any one passage in the context of the entire text. As an analogy, imagine if Google Maps allowed people to navigate street by individual street, as well as to teleport to any specific address, but prevented them from zooming out to see a neighborhood, state or country. Although e-readers like the Kindle and tablets like the iPad re-create pagination—sometimes complete with page numbers, headers and illustrations—the screen only displays a single virtual page: it is there and then it is gone. Instead of hiking the trail yourself, the trees, rocks and moss move past you in flashes with no trace of what came before and no way to see what lies ahead.
"The implicit feel of where you are in a physical book turns out to be more important than we realized," says Abigail Sellen of Microsoft Research Cambridge in England and co-author of The Myth of the Paperless Office. "Only when you get an e-book do you start to miss it. I don't think e-book manufacturers have thought enough about how you might visualize where you are in a book."
At least a few studies suggest that by limiting the way people navigate texts, screens impair comprehension. In a study published in January 2013 Anne Mangen of the University of Stavanger in Norway and her colleagues asked 72 10th-grade students of similar reading ability to study one narrative and one expository text, each about 1,500 words in length. Half the students read the texts on paper and half read them in pdf files on computers with 15-inch liquid-crystal display (LCD) monitors. Afterward, students completed reading-comprehension tests consisting of multiple-choice and short-answer questions, during which they had access to the texts. Students who read the texts on computers performed a little worse than students who read on paper.
Based on observations during the study, Mangen thinks that students reading pdf files had a more difficult time finding particular information when referencing the texts. Volunteers on computers could only scroll or click through the pdfs one section at a time, whereas students reading on paper could hold the text in its entirety in their hands and quickly switch between different pages. Because of their easy navigability, paper books and documents may be better suited to absorption in a text. "The ease with which you can find out the beginning, end and everything in between and the constant connection to your path, your progress in the text, might be some way of making it less taxing cognitively, so you have more free capacity for comprehension," Mangen says.
Supporting this research, surveys indicate that screens and e-readers interfere with two other important aspects of navigating texts: serendipity and a sense of control. People report that they enjoy flipping to a previous section of a paper book when a sentence surfaces a memory of something they read earlier, for example, or quickly scanning ahead on a whim. People also like to have as much control over a text as possible—to highlight with chemical ink, easily write notes to themselves in the margins as well as deform the paper however they choose.
Because of these preferences—and because getting away from multipurpose screens improves concentration—people consistently say that when they really want to dive into a text, they read it on paper. In a 2011 survey of graduate students at National Taiwan University, the majority reported browsing a few paragraphs online before printing out the whole text for more in-depth reading. A 2008 survey of millennials (people born between 1980 and the early 2000s) at Salve Regina University in Rhode Island concluded that, "when it comes to reading a book, even they prefer good, old-fashioned print". And in a 2003 study conducted at the National Autonomous University of Mexico, nearly 80 percent of 687 surveyed students preferred to read text on paper as opposed to on a screen in order to "understand it with clarity".
Surveys and consumer reports also suggest that the sensory experiences typically associated with reading—especially tactile experiences—matter to people more than one might assume. Text on a computer, an e-reader and—somewhat ironically—on any touch-screen device is far more intangible than text on paper. Whereas a paper book is made from pages of printed letters fixed in a particular arrangement, the text that appears on a screen is not part of the device's hardware—it is an ephemeral image. When reading a paper book, one can feel the paper and ink and smooth or fold a page with one's fingers; the pages make a distinctive sound when turned; and underlining or highlighting a sentence with ink permanently alters the paper's chemistry. So far, digital texts have not satisfyingly replicated this kind of tactility (although some companies are innovating, at least with keyboards).
Paper books also have an immediately discernible size, shape and weight. We might refer to a hardcover edition of War and Peace as a hefty tome or a paperback Heart of Darkness as a slim volume. In contrast, although a digital text has a length—which is sometimes represented with a scroll or progress bar—it has no obvious shape or thickness. An e-reader always weighs the same, regardless of whether you are reading Proust's magnum opus or one of Hemingway's short stories. Some researchers have found that these discrepancies create enough "haptic dissonance" to dissuade some people from using e-readers. People expect books to look, feel and even smell a certain way; when they do not, reading sometimes becomes less enjoyable or even unpleasant. For others, the convenience of a slim portable e-reader outweighs any attachment they might have to the feel of paper books.
Exhaustive reading
Although many old and recent studies conclude that people understand what they read on paper more thoroughly than what they read on screens, the differences are often small. Some experiments, however, suggest that researchers should look not just at immediate reading comprehension, but also at long-term memory. In a 2003 study Kate Garland of the University of Leicester and her colleagues asked 50 British college students to read study material from an introductory economics course either on a computer monitor or in a spiral-bound booklet. After 20 minutes of reading Garland and her colleagues quizzed the students with multiple-choice questions. Students scored equally well regardless of the medium, but differed in how they remembered the information.
Psychologists distinguish between remembering something—which is to recall a piece of information along with contextual details, such as where, when and how one learned it—and knowing something, which is feeling that something is true without remembering how one learned the information. Generally, remembering is a weaker form of memory that is likely to fade unless it is converted into more stable, long-term memory that is "known" from then on. When taking the quiz, volunteers who had read study material on a monitor relied much more on remembering than on knowing, whereas students who read on paper depended equally on remembering and knowing. Garland and her colleagues think that students who read on paper learned the study material more thoroughly more quickly; they did not have to spend a lot of time searching their minds for information from the text, trying to trigger the right memory—they often just knew the answers.
Other researchers have suggested that people comprehend less when they read on a screen because screen-based reading is more physically and mentally taxing than reading on paper. E-ink is easy on the eyes because it reflects ambient light just like a paper book, but computer screens, smartphones and tablets like the iPad shine light directly into people's faces. Depending on the model of the device, glare, pixelation and flickers can also tire the eyes. LCDs are certainly gentler on the eyes than their predecessor, cathode-ray tubes (CRT), but prolonged reading on glossy self-illuminated screens can cause eyestrain, headaches and blurred vision. Such symptoms are so common among people who read on screens—affecting around 70 percent of people who work long hours in front of computers—that the American Optometric Association officially recognizes computer vision syndrome.
Erik Wästlund of Karlstad University in Sweden has conducted some particularly rigorous research on whether paper or screens demand more physical and cognitive resources. In one of his experiments 72 volunteers completed the Higher Education Entrance Examination READ test—a 30-minute, Swedish-language reading-comprehension exam consisting of multiple-choice questions about five texts averaging 1,000 words each. People who took the test on a computer scored lower and reported higher levels of stress and tiredness than people who completed it on paper.
In another set of experiments 82 volunteers completed the READ test on computers, either as a paginated document or as a continuous piece of text. Afterward researchers assessed the students' attention and working memory, which is a collection of mental talents that allow people to temporarily store and manipulate information in their minds. Volunteers had to quickly close a series of pop-up windows, for example, sort virtual cards or remember digits that flashed on a screen. Like many cognitive abilities, working memory is a finite resource that diminishes with exertion.
Although people in both groups performed equally well on the READ test, those who had to scroll through the continuous text did not do as well on the attention and working-memory tests. Wästlund thinks that scrolling—which requires a reader to consciously focus on both the text and how they are moving it—drains more mental resources than turning or clicking a page, which are simpler and more automatic gestures. A 2004 study conducted at the University of Central Florida reached similar conclusions.
Attitude adjustments
An emerging collection of studies emphasizes that in addition to screens possibly taxing people's attention more than paper, people do not always bring as much mental effort to screens in the first place. Subconsciously, many people may think of reading on a computer or tablet as a less serious affair than reading on paper. Based on a detailed 2005 survey of 113 people in northern California, Ziming Liu of San Jose State University concluded that people reading on screens take a lot of shortcuts—they spend more time browsing, scanning and hunting for keywords compared with people reading on paper, and are more likely to read a document once, and only once.
When reading on screens, people seem less inclined to engage in what psychologists call metacognitive learning regulation—strategies such as setting specific goals, rereading difficult sections and checking how much one has understood along the way. In a 2011 experiment at the Technion–Israel Institute of Technology, college students took multiple-choice exams about expository texts either on computers or on paper. Researchers limited half the volunteers to a meager seven minutes of study time; the other half could review the text for as long as they liked. When under pressure to read quickly, students using computers and paper performed equally well. When managing their own study time, however, volunteers using paper scored about 10 percentage points higher. Presumably, students using paper approached the exam with a more studious frame of mind than their screen-reading peers, and more effectively directed their attention and working memory.
Perhaps, then, any discrepancies in reading comprehension between paper and screens will shrink as people's attitudes continue to change. The star of "A Magazine Is an iPad That Does Not Work" is three-and-a-half years old today and no longer interacts with paper magazines as though they were touchscreens, her father says. Perhaps she and her peers will grow up without the subtle bias against screens that seems to lurk in the minds of older generations. In current research for Microsoft, Sellen has learned that many people do not feel much ownership of e-books because of their impermanence and intangibility: "They think of using an e-book, not owning an e-book," she says. Participants in her studies say that when they really like an electronic book, they go out and get the paper version. This reminds Sellen of people's early opinions of digital music, which she has also studied. Despite initial resistance, people love curating, organizing and sharing digital music today. Attitudes toward e-books may transition in a similar way, especially if e-readers and tablets allow more sharing and social interaction than they currently do. Books on the Kindle can only be loaned once, for example.
To date, many engineers, designers and user-interface experts have worked hard to make reading on an e-reader or tablet as close to reading on paper as possible. E-ink resembles chemical ink and the simple layout of the Kindle's screen looks like a page in a paperback. Likewise, Apple's iBooks attempts to simulate the overall aesthetic of paper books, including somewhat realistic page-turning. Jaejeung Kim of KAIST Institute of Information Technology Convergence in South Korea and his colleagues have designed an innovative and unreleased interface that makes iBooks seem primitive. When using their interface, one can see the many individual pages one has read on the left side of the tablet and all the unread pages on the right side, as if holding a paperback in one's hands. A reader can also flip bundles of pages at a time with a flick of a finger.
But why, one could ask, are we working so hard to make reading with new technologies like tablets and e-readers so similar to the experience of reading on the very ancient technology that is paper? Why not keep paper and evolve screen-based reading into something else entirely? Screens obviously offer readers experiences that paper cannot. Scrolling may not be the ideal way to navigate a text as long and dense as Moby Dick, but the New York Times, Washington Post, ESPN and other media outlets have created beautiful, highly visual articles that depend entirely on scrolling and could not appear in print in the same way. Some Web comics and infographics turn scrolling into a strength rather than a weakness. Similarly, Robin Sloan has pioneered the tap essay for mobile devices. The immensely popular interactive Scale of the Universe tool could not have been made on paper in any practical way. New e-publishing companies like Atavist offer tablet readers long-form journalism with embedded interactive graphics, maps, timelines, animations and sound tracks. And some writers are pairing up with computer programmers to produce ever more sophisticated interactive fiction and nonfiction in which one's choices determine what one reads, hears and sees next.
When it comes to intensively reading long pieces of plain text, paper and ink may still have the advantage. But text is not the only way to read.
Scientific American
E-readers and tablets are becoming more popular as such technologies improve, but research suggests that reading on paper still boasts unique advantages
By Ferris Jabr
In a viral YouTube video from October 2011 a one-year-old girl sweeps her fingers across an iPad's touchscreen, shuffling groups of icons. In the following scenes she appears to pinch, swipe and prod the pages of paper magazines as though they too were screens. When nothing happens, she pushes against her leg, confirming that her finger works just fine—or so a title card would have us believe.
The girl's father, Jean-Louis Constanza, presents "A Magazine Is an iPad That Does Not Work" as naturalistic observation—a Jane Goodall among the chimps moment—that reveals a generational transition. "Technology codes our minds," he writes in the video's description. "Magazines are now useless and impossible to understand, for digital natives"—that is, for people who have been interacting with digital technologies from a very early age.
Perhaps his daughter really did expect the paper magazines to respond the same way an iPad would. Or maybe she had no expectations at all—maybe she just wanted to touch the magazines. Babies touch everything. Young children who have never seen a tablet like the iPad or an e-reader like the Kindle will still reach out and run their fingers across the pages of a paper book; they will jab at an illustration they like; heck, they will even taste the corner of a book. Today's so-called digital natives still interact with a mix of paper magazines and books, as well as tablets, smartphones and e-readers; using one kind of technology does not preclude them from understanding another.
Nevertheless, the video brings into focus an important question: How exactly does the technology we use to read change the way we read? How reading on screens differs from reading on paper is relevant not just to the youngest among us, but to just about everyone who reads—to anyone who routinely switches between working long hours in front of a computer at the office and leisurely reading paper magazines and books at home; to people who have embraced e-readers for their convenience and portability, but admit that for some reason they still prefer reading on paper; and to those who have already vowed to forgo tree pulp entirely. As digital texts and technologies become more prevalent, we gain new and more mobile ways of reading—but are we still reading as attentively and thoroughly? How do our brains respond differently to onscreen text than to words on paper? Should we be worried about dividing our attention between pixels and ink or is the validity of such concerns paper-thin?
Since at least the 1980s researchers in many different fields—including psychology, computer engineering, and library and information science—have investigated such questions in more than one hundred published studies. The matter is by no means settled. Before 1992 most studies concluded that people read slower, less accurately and less comprehensively on screens than on paper. Studies published since the early 1990s, however, have produced more inconsistent results: a slight majority has confirmed earlier conclusions, but almost as many have found few significant differences in reading speed or comprehension between paper and screens. And recent surveys suggest that although most people still prefer paper—especially when reading intensively—attitudes are changing as tablets and e-reading technology improve and reading digital books for facts and fun becomes more common. In the U.S., e-books currently make up between 15 and 20 percent of all trade book sales.
Even so, evidence from laboratory experiments, polls and consumer reports indicates that modern screens and e-readers fail to adequately recreate certain tactile experiences of reading on paper that many people miss and, more importantly, prevent people from navigating long texts in an intuitive and satisfying way. In turn, such navigational difficulties may subtly inhibit reading comprehension. Compared with paper, screens may also drain more of our mental resources while we are reading and make it a little harder to remember what we read when we are done. A parallel line of research focuses on people's attitudes toward different kinds of media. Whether they realize it or not, many people approach computers and tablets with a state of mind less conducive to learning than the one they bring to paper.
"There is physicality in reading," says developmental psychologist and cognitive scientist Maryanne Wolf of Tufts University, "maybe even more than we want to think about as we lurch into digital reading—as we move forward perhaps with too little reflection. I would like to preserve the absolute best of older forms, but know when to use the new."
Navigating textual landscapes
Understanding how reading on paper is different from reading on screens requires some explanation of how the brain interprets written language. We often think of reading as a cerebral activity concerned with the abstract—with thoughts and ideas, tone and themes, metaphors and motifs. As far as our brains are concerned, however, text is a tangible part of the physical world we inhabit. In fact, the brain essentially regards letters as physical objects because it does not really have another way of understanding them. As Wolf explains in her book Proust and the Squid, we are not born with brain circuits dedicated to reading. After all, we did not invent writing until relatively recently in our evolutionary history, around the fourth millennium B.C. So the human brain improvises a brand-new circuit for reading by weaving together various regions of neural tissue devoted to other abilities, such as spoken language, motor coordination and vision.
Some of these repurposed brain regions are specialized for object recognition—they are networks of neurons that help us instantly distinguish an apple from an orange, for example, yet classify both as fruit. Just as we learn that certain features—roundness, a twiggy stem, smooth skin—characterize an apple, we learn to recognize each letter by its particular arrangement of lines, curves and hollow spaces. Some of the earliest forms of writing, such as Sumerian cuneiform, began as characters shaped like the objects they represented—a person's head, an ear of barley, a fish. Some researchers see traces of these origins in modern alphabets: C as crescent moon, S as snake. Especially intricate characters—such as Chinese hanzi and Japanese kanji—activate motor regions in the brain involved in forming those characters on paper: The brain literally goes through the motions of writing when reading, even if the hands are empty. Researchers recently discovered that the same thing happens in a milder way when some people read cursive.
Beyond treating individual letters as physical objects, the human brain may also perceive a text in its entirety as a kind of physical landscape. When we read, we construct a mental representation of the text in which meaning is anchored to structure. The exact nature of such representations remains unclear, but they arelikely similar to the mental maps we create of terrain—such as mountains and trails—and of man-made physical spaces, such as apartments and offices. Both anecdotally and in published studies, people report that when trying to locate a particular piece of written information they often remember where in the text it appeared. We might recall that we passed the red farmhouse near the start of the trail before we started climbing uphill through the forest; in a similar way, we remember that we read about Mr. Darcy rebuffing Elizabeth Bennett on the bottom of the left-hand page in one of the earlier chapters.
In most cases, paper books have more obvious topography than onscreen text. An open paperback presents a reader with two clearly defined domains—the left and right pages—and a total of eight corners with which to orient oneself. A reader can focus on a single page of a paper book without losing sight of the whole text: one can see where the book begins and ends and where one page is in relation to those borders. One can even feel the thickness of the pages read in one hand and pages to be read in the other. Turning the pages of a paper book is like leaving one footprint after another on the trail—there's a rhythm to it and a visible record of how far one has traveled. All these features not only make text in a paper book easily navigable, they also make it easier to form a coherent mental map of the text.
In contrast, most screens, e-readers, smartphones and tablets interfere with intuitive navigation of a text and inhibit people from mapping the journey in their minds. A reader of digital text might scroll through a seamless stream of words, tap forward one page at a time or use the search function to immediately locate a particular phrase—but it is difficult to see any one passage in the context of the entire text. As an analogy, imagine if Google Maps allowed people to navigate street by individual street, as well as to teleport to any specific address, but prevented them from zooming out to see a neighborhood, state or country. Although e-readers like the Kindle and tablets like the iPad re-create pagination—sometimes complete with page numbers, headers and illustrations—the screen only displays a single virtual page: it is there and then it is gone. Instead of hiking the trail yourself, the trees, rocks and moss move past you in flashes with no trace of what came before and no way to see what lies ahead.
"The implicit feel of where you are in a physical book turns out to be more important than we realized," says Abigail Sellen of Microsoft Research Cambridge in England and co-author of The Myth of the Paperless Office. "Only when you get an e-book do you start to miss it. I don't think e-book manufacturers have thought enough about how you might visualize where you are in a book."
At least a few studies suggest that by limiting the way people navigate texts, screens impair comprehension. In a study published in January 2013 Anne Mangen of the University of Stavanger in Norway and her colleagues asked 72 10th-grade students of similar reading ability to study one narrative and one expository text, each about 1,500 words in length. Half the students read the texts on paper and half read them in pdf files on computers with 15-inch liquid-crystal display (LCD) monitors. Afterward, students completed reading-comprehension tests consisting of multiple-choice and short-answer questions, during which they had access to the texts. Students who read the texts on computers performed a little worse than students who read on paper.
Based on observations during the study, Mangen thinks that students reading pdf files had a more difficult time finding particular information when referencing the texts. Volunteers on computers could only scroll or click through the pdfs one section at a time, whereas students reading on paper could hold the text in its entirety in their hands and quickly switch between different pages. Because of their easy navigability, paper books and documents may be better suited to absorption in a text. "The ease with which you can find out the beginning, end and everything inbetween and the constant connection to your path, your progress in the text, might be some way of making it less taxing cognitively, so you have more free capacity for comprehension," Mangen says.
Supporting this research, surveys indicate that screens and e-readers interfere with two other important aspects of navigating texts: serendipity and a sense of control.People report that they enjoy flipping to a previous section of a paper book when a sentence surfaces a memory of something they read earlier, for example, or quickly scanning ahead on a whim. People also like to have as much control over a text as possible—to highlight with chemical ink, easily write notes to themselves in the margins as well as deform the paper however they choose.
Because of these preferences—and because getting away from multipurpose screens improves concentration—people consistently say that when they really want to dive into a text, they read it on paper. In a 2011 survey of graduate students at National Taiwan University, the majority reported browsing a few paragraphs online before printing out the whole text for more in-depth reading. A 2008 survey of millennials (people born between 1980 and the early 2000s) at Salve Regina University in Rhode Island concluded that, "when it comes to reading a book, even they prefer good, old-fashioned print". And in a 2003 study conducted at the National Autonomous University of Mexico, nearly 80 percent of 687 surveyed students preferred to read text on paper as opposed to on a screen in order to "understand it with clarity".
Surveys and consumer reports also suggest that the sensory experiences typically associated with reading—especially tactile experiences—matter to people more than one might assume. Text on a computer, an e-reader and—somewhat ironically—on any touch-screen device is far more intangible than text on paper. Whereas a paper book is made from pages of printed letters fixed in a particular arrangement, the text that appears on a screen is not part of the device's hardware—it is an ephemeral image. When reading a paper book, one can feel the paper and ink and smooth or fold a page with one's fingers; the pages make a distinctive sound when turned; and underlining or highlighting a sentence with ink permanently alters the paper's chemistry. So far, digital texts have not satisfyingly replicated this kind of tactility (although some companies are innovating, at least with keyboards).
Paper books also have an immediately discernible size, shape and weight. We might refer to a hardcover edition of War and Peace as a hefty tome or a paperback Heart of Darkness as a slim volume. In contrast, although a digital text has a length—which is sometimes represented with a scroll or progress bar—it has no obvious shape or thickness. An e-reader always weighs the same, regardless of whether you are reading Proust's magnum opus or one of Hemingway's short stories. Some researchers have found that these discrepancies create enough "haptic dissonance" to dissuade some people from using e-readers. People expect books to look, feel and even smell a certain way; when they do not, reading sometimes becomes less enjoyable or even unpleasant. For others, the convenience of a slim portable e-reader outweighs any attachment they might have to the feel of paper books.
Exhaustive reading
Although many old and recent studies conclude that people understand what they read on paper more thoroughly than what they read on screens, the differences are often small. Some experiments, however, suggest that researchers should look not just at immediate reading comprehension, but also at long-term memory. In a 2003 study Kate Garland of the University of Leicester and her colleagues asked 50 British college students to read study material from an introductory economics course either on a computer monitor or in a spiral-bound booklet. After 20 minutes of reading Garland and her colleagues quizzed the students with multiple-choice questions. Students scored equally well regardless of the medium, but differed in how they remembered the information.
Psychologists distinguish between remembering something—which is to recall a piece of information along with contextual details, such as where, when and how one learned it—and knowing something, which is feeling that something is true without remembering how one learned the information. Generally, remembering is a weaker form of memory that is likely to fade unless it is converted into more stable, long-term memory that is "known" from then on. When taking the quiz, volunteers who had read study material on a monitor relied much more on remembering than on knowing, whereas students who read on paper depended equally on remembering and knowing. Garland and her colleagues think that students who read on paper learned the study material more thoroughly more quickly; they did not have to spend a lot of time searching their minds for information from the text, trying to trigger the right memory—they often just knew the answers.
Other researchers have suggested that people comprehend less when they read on a screen because screen-based reading is more physically and mentally taxing than reading on paper. E-ink is easy on the eyes because it reflects ambient light just like a paper book, but computer screens, smartphones and tablets like the iPad shine light directly into people's faces. Depending on the model of the device, glare, pixelation and flickers can also tire the eyes. LCDs are certainly gentler on eyes than their predecessor, cathode-ray tubes (CRT), but prolonged reading on glossy self-illuminated screens can cause eyestrain, headaches and blurred vision. Such symptoms are so common among people who read on screens—affecting around 70 percent of people who work long hours in front of computers—that the American Optometric Association officially recognizes computer vision syndrome.
Erik Wästlund of Karlstad University in Sweden has conducted some particularly rigorous research on whether paper or screens demand more physical and cognitive resources. In one of his experiments 72 volunteers completed the Higher Education Entrance Examination READ test—a 30-minute, Swedish-language reading-comprehension exam consisting of multiple-choice questions about five texts averaging 1,000 words each. People who took the test on a computer scored lower and reported higher levels of stress and tiredness than people who completed it on paper.
In another set of experiments 82 volunteers completed the READ test on computers, either as a paginated document or as a continuous piece of text. Afterward researchers assessed the students' attention and working memory, which is a collection of mental talents that allow people to temporarily store and manipulate information in their minds. Volunteers had to quickly close a series of pop-up windows, for example, sort virtual cards or remember digits that flashed on a screen. Like many cognitive abilities, working memory is a finite resource that diminishes with exertion.
Although people in both groups performed equally well on the READ test, those who had to scroll through the continuous text did not do as well on the attention and working-memory tests. Wästlund thinks that scrolling—which requires a reader to consciously focus on both the text and how they are moving it—drains more mental resources than turning or clicking a page, which are simpler and more automatic gestures. A 2004 study conducted at the University of Central Florida reached similar conclusions.
Attitude adjustments
An emerging collection of studies emphasizes that in addition to screens possibly taxing people's attention more than paper, people do not always bring as much mental effort to screens in the first place. Subconsciously, many people may think of reading on a computer or tablet as a less serious affair than reading on paper. Based on a detailed 2005 survey of 113 people in northern California, Ziming Liu of San Jose State University concluded that people reading on screens take a lot of shortcuts—they spend more time browsing, scanning and hunting for keywords compared with people reading on paper, and are more likely to read a document once, and only once.
When reading on screens, people seem less inclined to engage in what psychologists call metacognitive learning regulation—strategies such as setting specific goals, rereading difficult sections and checking how much one has understood along the way. In a 2011 experiment at the Technion–Israel Institute of Technology, college students took multiple-choice exams about expository texts either on computers or on paper. Researchers limited half the volunteers to a meager seven minutes of study time; the other half could review the text for as long as they liked. When under pressure to read quickly, students using computers and paper performed equally well. When managing their own study time, however, volunteers using paper scored about 10 percentage points higher. Presumably, students using paper approached the exam with a more studious frame of mind than their screen-reading peers, and more effectively directed their attention and working memory.
Perhaps, then, any discrepancies in reading comprehension between paper and screens will shrink as people's attitudes continue to change. The star of "A Magazine Is an iPad That Does Not Work" is three-and-a-half years old today and no longer interacts with paper magazines as though they were touchscreens, her father says. Perhaps she and her peers will grow up without the subtle bias against screens that seems to lurk in the minds of older generations. In current research for Microsoft, Sellen has learned that many people do not feel much ownership of e-books because of their impermanence and intangibility: "They think of using an e-book, not owning an e-book," she says. Participants in her studies say that when they really like an electronic book, they go out and get the paper version. This reminds Sellen of people's early opinions of digital music, which she has also studied. Despite initial resistance, people love curating, organizing and sharing digital music today. Attitudes toward e-books may transition in a similar way, especially if e-readers and tablets allow more sharing and social interaction than they currently do. Books on the Kindle can only be loaned once, for example.
To date, many engineers, designers and user-interface experts have worked hard to make reading on an e-reader or tablet as close to reading on paper as possible. E-ink resembles chemical ink and the simple layout of the Kindle's screen looks like a page in a paperback. Likewise, Apple's iBooks attempts to simulate the overall aesthetic of paper books, including somewhat realistic page-turning. Jaejeung Kim of KAIST Institute of Information Technology Convergence in South Korea and his colleagues have designed an innovative and unreleased interface that makes iBooks seem primitive. When using their interface, one can see the many individual pages one has read on the left side of the tablet and all the unread pages on the right side, as if holding a paperback in one's hands. A reader can also flip bundles of pages at a time with a flick of a finger.
But why, one could ask, are we working so hard to make reading with new technologies like tablets and e-readers so similar to the experience of reading on the very ancient technology that is paper? Why not keep paper and evolve screen-based reading into something else entirely? Screens obviously offer readers experiences that paper cannot. Scrolling may not be the ideal way to navigate a text as long and dense as Moby Dick, but the New York Times, Washington Post, ESPN and other media outlets have created beautiful, highly visual articles that depend entirely on scrolling and could not appear in print in the same way. Some Web comics and infographics turn scrolling into a strength rather than a weakness. Similarly, Robin Sloan has pioneered the tap essay for mobile devices. The immensely popular interactive Scale of the Universe tool could not have been made on paper in any practical way. New e-publishing companies like Atavist offer tablet readers long-form journalism with embedded interactive graphics, maps, timelines, animations and sound tracks. And some writers are pairing up with computer programmers to produce ever more sophisticated interactive fiction and nonfiction in which one's choices determine what one reads, hears and sees next.
When it comes to intensively reading long pieces of plain text, paper and ink may still have the advantage. But text is not the only way to read.
Scientific American
Saturday, May 4, 2013
History of the first web page
21:56
Juan MC Larrosa
No comments
The little miracle on Tim's desk
Not so long ago, if you began a story with the phrase "In the past...", your listener knew you were talking, at the very least, about the discovery of America. Or about the Jurassic period.
Not anymore. Now Columbus is just around the calendar's corner and the dinosaurs went extinct the day before yesterday. Take the Web, for example. No, it has not gone extinct. Quite the opposite. We live in a Web world. Email, searching for a new house, buying and selling almost any product or service, the flights and hotels for a dream trip, Facebook and Wikipedia, dozens of little games, the time in Paris, the weather over your roof and your roof itself on Google Maps: all of it lives today in that Internet service we call the Web.
Although the Internet and the Web are not the same thing (not even remotely), if this nation of almost 2.5 billion inhabitants that is the Net had a front gate, it would undoubtedly be a Web page. What is more, one feels that the Web has always been there. Come on, how did we live before?
The truth is that many of us spent a substantial part of our lives without the Web (and without the Internet, for that matter). Seriously! In fact, last Tuesday the Web officially turned 20. Counting from its creation it is actually a few years older (say, 22), but it has spent 20 years among us, the ordinary citizens.
Only 20 years. And it feels like 100.
Tim Berners-Lee
For the occasion, the European Organization for Nuclear Research (CERN) recovered from its archives the first Web page in history (http://info.cern.ch/hypertext/WWW/TheProject.html), put online in August 1991 by Tim Berners-Lee, who in March 1989 had proposed this hypertext system for CERN's documentation and who in 1990, with the help of the Belgian Robert Cailliau, had managed to get it running.
On April 30, 1993, CERN published a document formally announcing that the Web was passing into the public domain (https://cds.cern.ch/record/1164399). That same year the United States' National Center for Supercomputing Applications released the first graphical browser, called Mosaic, which would not only be a fundamental driver of the newborn Web's expansion but would soon inspire Marc Andreessen, co-author of Mosaic, to found Netscape. Two years later, in September 1995, residential Internet connections would arrive in Argentina.
Now, if the founding site of the Web strikes you as an antique, the first Web server will leave you speechless. Hint: a smartphone is roughly 90 times more powerful than that historic machine, the computer on which the first HTTP server ran.
PLEASE DO NOT TURN OFF
What is a Web server? It is the machine that hosts and serves the Web pages we see every day. In other words, when we visit a site, our PC, smartphone or tablet talks to another computer, called a Web server or HTTP server, which replies by sending the data that make up the page we want to see; Internet Explorer, Chrome or Firefox then turn that data into the polished graphics we are used to.
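To make that request-and-response exchange concrete, here is a minimal sketch of an HTTP server using only Python's standard library. It is purely illustrative (it is not the CERN httpd, and the page content and port are made up for the example); it simply shows the cycle described above: the browser asks for a page and the server sends back the bytes that make it up.

```python
# Minimal illustrative HTTP server using only Python's standard library.
# Not the CERN httpd; it only demonstrates the request/response cycle:
# the browser asks for a path, the server replies with the page's bytes.
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"<html><body><h1>Hello, Web</h1></body></html>"  # hypothetical content

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every GET request gets the same tiny page back.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(PAGE)))
        self.end_headers()
        self.wfile.write(PAGE)

if __name__ == "__main__":
    # Port 8000 is arbitrary; a browser pointed at http://localhost:8000
    # will render the page, exactly as described in the paragraph above.
    HTTPServer(("", 8000), Handler).serve_forever()
```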
Of course, when you have hundreds of thousands of people visiting your site, one computer is not enough. But in June 1991 there was a single Web site, Tim Berners-Lee's, hosted on his own computer. On his desk.
Here is a photo of that machine: http://upload.wikimedia.org/wikipedia/commons/d/d1/First_Web_Server.jpg
Yes, it is a NeXT, from the company Steve Jobs founded after leaving Apple in 1985. The NeXT machines, launched in 1988, were not a commercial success, but they were highly valued in scientific and technical circles because, for their time, they were remarkably advanced. What is more, their software would set the pace for a good part of the computing we enjoy today (more on this shortly).
Let us look at the photo for a moment. The monitor was a cathode-ray tube (the first LCD screens for desktop computers would not appear until the mid-1990s) and, moreover, black and white; that detail cannot be seen in this photo because the screen is off, but the stand is characteristic of NeXT's monochrome displays, a model aimed at graphic design for newspapers, books and magazines. This site holds, among other historical images, a screenshot of Berners-Lee's NeXT: http://info.cern.ch/www20/photos/
On the keyboard lies a copy of the document describing the hypertext system Berners-Lee proposed in March 1989. Stuck to the front of the case, on the right, is a label that is rather endearing, considering what came later. Handwritten in red marker, it reads: "This machine is a server. Do not turn it off!"
It was not just a server. It was the first of its kind, the first Web server. It was called CERN httpd (the d stands for daemon, the Unix word for background processes), and it was discontinued in 1996. A curious detail: this software suffered from the Y2K bug, the error that afflicted many systems that stored the year in two digits and could therefore confuse 2000 with 1900.
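The mechanics of that bug are easy to reproduce. The following is an illustrative sketch only (not CERN httpd code): if a program keeps just the last two digits of the year and assumes a "19" prefix, the year 2000 comes out as 1900.

```python
# Illustrative sketch of the two-digit-year (Y2K) bug described above.
def naive_full_year(two_digit_year: int) -> int:
    # Buggy assumption: every year belongs to the 1900s.
    return 1900 + two_digit_year

def fixed_full_year(two_digit_year: int, pivot: int = 70) -> int:
    # A common post-Y2K fix: two-digit years below the pivot map to 20xx.
    return (2000 if two_digit_year < pivot else 1900) + two_digit_year

print(naive_full_year(99))  # 1999 -- looks fine throughout the 1990s
print(naive_full_year(0))   # 1900 -- the year 2000 mistaken for 1900
print(fixed_full_year(0))   # 2000
```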
JET PLANE
As I said, the NeXT machines were powerful for their time. I remember seeing ads and reviews of these computers. One could only dream of buying a NeXT: in today's money they cost about 12,000 dollars. Besides, they were not sold in Argentina.
And what did you get for that money? (I recommend sitting down before reading on.) A 32-bit microprocessor running at 25 MHz (25 million cycles per second), Motorola's 68030, which was also used in Apple's Macintosh II line, in Commodore Amigas and in Sun Microsystems machines. The 68030, which reached 18 MIPS (millions of instructions per second), had no floating-point arithmetic of its own, so the NeXT machines included a math coprocessor, the 68882.
How does that compare with today's technology? The electronic brain of my smartphone is a 64-bit chip with 4 cores running at 1.5 billion cycles per second; that is, a clock 60 times faster than the 68030's. I measured its MIPS (with the CF-Bench app, for Android) and the result was 1,620, about 90 times more than those NeXT machines. In theory the difference should be larger, on the order of 500 times, but even with this more conservative measurement the gulf is enormous. If the speed limit on urban highways increased 90-fold, you could travel at 9,000 kilometers per hour. Almost 3 times the speed record reached by a crewed aircraft. Or 10 times the speed of a passenger jet. Or, give or take, the speed of a bullet.
And compared with a latest-generation desktop PC? Take an Intel Core i7 Extreme Edition, which exceeds 147,000 MIPS. That is more than 8,000 times faster than the NeXT on which the Web was born. It is so much faster that analogies with the physical world stop making sense.
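The ratios quoted in the two paragraphs above are simple enough to check. A quick sketch follows (the 100 km/h urban-highway limit used for the analogy is an assumption, not stated in the text):

```python
# Back-of-the-envelope check of the ratios quoted above.
next_clock_mhz  = 25        # Motorola 68030 in the NeXT
phone_clock_mhz = 1500      # the author's smartphone
next_mips       = 18
phone_mips      = 1620      # measured with CF-Bench
core_i7_mips    = 147_000   # Intel Core i7 Extreme Edition figure cited above

print(phone_clock_mhz / next_clock_mhz)  # 60.0  -> "a clock 60 times faster"
print(phone_mips / next_mips)            # 90.0  -> "about 90 times more"
print(round(core_i7_mips / next_mips))   # 8167  -> "more than 8,000 times faster"

# The highway analogy, assuming a 100 km/h urban speed limit:
print(100 * 90)                          # 9000 km/h
```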
SILICON MEMORIES
Machines like the one Tim Berners-Lee had on his desk came with 8 MB of RAM, expandable to 16; yes, megabytes. The phone I carry in my pocket has between 128 and 256 times more RAM; my PC has 1,000 times more memory.
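Working backwards from those multipliers gives the implied sizes; a quick sketch (the 1–2 GB phone and roughly 8 GB PC figures are inferred from the multipliers, not stated in the text):

```python
# Working backwards from the multipliers quoted above (8 MB baseline).
next_ram_mb = 8
print(next_ram_mb * 128)    # 1024 MB  -> a 1 GB phone
print(next_ram_mb * 256)    # 2048 MB  -> a 2 GB phone
print(next_ram_mb * 1000)   # 8000 MB  -> roughly an 8 GB desktop PC
```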
At the time, though, 8 MB of RAM was a privilege; most personal computers of the era shipped with 1 MB of memory or less.
As I mentioned, another interesting detail of the computer that ran the first Web server is its software. It ran NeXTSTEP, a Unix-like operating system based on the Mach kernel from Carnegie Mellon University. It also offered the Objective-C programming language, born in the early 1980s at the hands of Brad Cox and Tom Love, who set out to add to the C language the object orientation that was the hallmark of another famous language of the era, Smalltalk.
What does this have to do with today's computing? Consider this: NeXTSTEP became OS X, the operating system of today's Macs, and iOS, the system running on iPads and iPhones. Objective-C is the language used today, suitably updated, to build applications for the Mac, the iPhone and the iPad. In fact, the development environment that shipped with the NeXT machines, called Project Builder, is the grandmother of today's Xcode. This set of integrated software tools thus laid the groundwork for the modern model of app stores tied to devices.
For those interested in the brilliant, short-lived technology stars that were the NeXT machines, here is a site with plenty of good information: http://www.kevra.org/TheBestOfNext/
Although it is hard to make out in the photo, Berners-Lee's NeXT had a black, cube-shaped magnesium case, which earned it the nickname The Cube, a nickname Jobs would adopt for the next generation, christened the NeXT Cube. Manufactured in 1990, that model came standard with 16 MB of RAM, a more powerful processor (the 68040), a 400 MB hard disk and an additional floppy drive slot near the top edge.
Of course, beyond computing power there is the matter of connectivity. Those numbers are simply off the scale. To keep it brief, and just as an example, Google needs 200,000 times the bandwidth of a home connection to deliver its services.
***
Many great things have had humble beginnings. But few have been as colossal, and at the same time of such humble origin, as the Web, which 20 years later has not a single lonely site hosted on a desktop computer but more than 630 million sites running in immense server farms.
It has grown so large that it no longer fits in a photo.