
Thursday, 24 March 2011

the most typical face on the planet - compositing images

I would have loved to watch the process of compositing the image; it isn't shown in this short clip.

Briefly, this video is about the most typical face on the planet.

Here's what the article says:

"National Geographic Magazine released a video clip, below, showing the most "typical" human face on the planet as part of its series on the human racecalled "Population 7 billion."


The researchers conclude that a male, 28-year-old Han Chinese man is the most typical person on the planet. There are 9 million of them.

The image above is a composite of nearly 200,000 photos of men who fit that description.

Don't get used to the results, however. Within 20 years, the most typical person will reside in India."

You can check out the video below:


Thursday, 10 March 2011

another good article: Graphing Culture by James Williford

"The problem with the humanities,” Lev Manovich told me over a quick meal at a strip-mall sushi joint in La Jolla last January, “is that people tend to worry too much about what can’t be done, about mistakes, problems, as opposed to just going and doing something.”

It was almost 8:00 p.m. Manovich, a professor of visual arts at UC–San Diego, had already spent the better part of the day in faculty meetings, led a class of undergrads through a nearly three-hour session of Time- and Process-Based Digital Media II, attended an informational event hosted by Google, and caught up on the progress of several of his graduate and postdoctoral researchers. In a few minutes, he’d be on his way home to put the finishing touches on a work of video art that needed to be installed at a friend’s gallery in downtown San Diego the next day. Inertia, I thought—of the intellectual variety or any other—is simply not a part of this guy’s constitution. “Of course, visualizations can’t represent everything,” he continued. “Of course, there are limits, but let’s not spend all of our time talking about it—let’s go ahead and do it, let’s figure out what we can do, right?”

Lev Manovich in front of the HIPerSpace.
—Photo courtesy of Calit2

Manovich was expounding the merits of “cultural analytics”—which he inaugurated in a 2009 essay—“a new paradigm for the study, teaching, and public presentation of cultural artifacts, dynamics, and flows.” Inspired by both the explosion of media on the Internet in recent years (YouTube, Flickr, ARTstor) and the increasingly interactive nature of our everyday media experiences (browsing the Web, playing computer games, manipulating images in Photoshop), the general idea of cultural analytics is to apply data visualization and analysis techniques traditionally associated with the so-called hard sciences—graphing, mapping, diagramming, and so on—to the study of visual culture. The difference between Manovich’s essay and so many other attempts to outline potential intersections between new media and the humanities is that it was more of a report to the academy than a mere call to action.

Operating under the banner of the Software Studies Initiative, Manovich and a handful of other scholars at UCSD’s Center for Research in Computing and the Arts had already spent two years asking and taking practical steps toward answering such questions as, What can one do with the vast archives of cultural material now available, as we say, at the click of a button? Where might one begin a discussion of the aesthetic properties of the millions of user-generated videos posted online? How does one capture and summarize the dynamics of sixty-plus hours of video-game play?

The entire enterprise, they discovered early on, was not so straightforward as feeding new kinds of datasets into existing software systems and interpreting the results they spit out. As Manovich explained, there are too many assumptions and predetermined pigeonholes built into most scientific visualization technologies. “So, for example, you have some type of medical imaging technique that you use for distinguishing healthy cells from cancerous ones.” A great thing, of course, if you happen to be an oncologist or an oncologist’s patient. “But you don’t want to divide culture into a few small categories,” he said. “What’s interesting about culture is that the categories are continuous. Instead of using these techniques to reduce complexity, to divide data into a few categories, I want to map the complexity.”

Jeremy Douglass (of UCSD’s Software Studies Initiative) and Florian Wiencek (of Jacobs University, Bremen) explore an image set of over one million pages of manga. The visualization, which draws on 883 different titles, raises questions about the relations between genre, visual style, and target audience.
—Photo Courtesy of the Software Studies Initiative

Among a variety of approaches to cultural analytics that the Software Studies team has developed since 2007 is a new (though, Manovich was careful to point out, not entirely unprecedented) kind of visualization that they call “direct” or “non-reductive.” Unlike most visualizations, which use points, lines, and other graphical primitives to represent data by abstraction, direct visualizations use images of the actual cultural objects from which the data was derived in the first place. In other words, instead of creating a standard point-based scatter plot reflecting, say, the brightness of Mark Rothko’s paintings over time, a direct visualization of that dataset will show the same pattern by distributing images of the paintings themselves across the graph space. Beyond its obvious visual appeal, the method offers a number of practical advantages to the humanities scholar. Seeing the pattern in its original context “should allow you to notice all kinds of other patterns that are not necessarily represented in your measurements,” said Manovich. It also allows you to move quickly between close reading—focusing on a single Rothko painting—and what literary historian Franco Moretti has termed “distant reading”—viewing a whole set of Rothkos at once. And with a large enough collection of data, Manovich added, “you might even discover other ‘zoom levels’ from which you can look at culture that may not correspond to a book, an author, a group of authors, a historical period, but are equally interesting. You can slice through cultural data diagonally in all kinds of ways.”
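
As a rough illustration of what a direct visualization involves (a sketch of my own, not the Software Studies Initiative's code), the Python snippet below places each painting's scanned image at its (year, brightness) position instead of drawing an abstract point. The folder name and the year-prefixed file naming are hypothetical.

# A minimal, illustrative sketch of a "direct visualization" in Python.
# Assumes a hypothetical folder of scanned paintings named "<year>_<title>.jpg".
from pathlib import Path

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.offsetbox import OffsetImage, AnnotationBbox
from PIL import Image

def mean_brightness(img):
    """Average greyscale pixel value, 0 (black) to 255 (white)."""
    return float(np.asarray(img.convert("L")).mean())

fig, ax = plt.subplots(figsize=(12, 6))
for path in sorted(Path("paintings").glob("*.jpg")):       # hypothetical folder
    year = int(path.stem.split("_")[0])                    # hypothetical naming scheme
    img = Image.open(path)
    y = mean_brightness(img)
    thumb = img.copy()
    thumb.thumbnail((60, 60))                               # shrink for plotting
    # The image itself is placed at (year, brightness) instead of an abstract dot.
    ax.add_artist(AnnotationBbox(OffsetImage(np.asarray(thumb)), (year, y), frameon=False))
    ax.scatter(year, y, alpha=0)                            # invisible point so the axes autoscale

ax.set_xlabel("Year")
ax.set_ylabel("Mean brightness (0-255)")
plt.show()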

The day before, Manovich and Jeremy Douglass, a Software Studies postdoc with a background in literary theory and game studies, had shown me around some of the team’s recent data-slicing projects. They took me to the second floor of UCSD’s Atkinson Hall, where, at the edge of a large, cubicled workspace, the university keeps one of the highest resolution display systems in the world—the 238.5 square foot, 286.7 megapixel Highly Interactive Parallelized Display Space, or HIPerSpace, for short.

The first visualization they loaded was one of their simplest, but also most striking: a montage of every Time magazine cover published between March 3, 1923 (the first issue), and September 14, 2009—4,535 images in all. Laid out chronologically, beginning at the upper left-hand corner with a cover featuring Illinois Congressman Joseph G. Cannon and ending at the lower right with Jay Leno, the series immediately reveals certain patterns in the magazine's stylistic evolution: the shrinking of a decorative white border, the gradual transition from black-and-white to color printing, long periods in which certain hues come to dominate, and so on. From a distance, it's a bit like looking at the annual growth rings of some felled ancient tree—except that instead of simply indicating a history of local climate conditions, the patterns here raise questions about the broader milieu in which the changes took place, about the nature of visual style, and perhaps even about the very idea of historical patterns. Visualization "makes the job of our visual system easier," said Manovich, "but it's not going to explain a pattern. It confronts you with something you wouldn't notice otherwise, confronts you with new cultural facts. You see things that, probably, nobody has noticed before, new cultural patterns that you now have to explain."
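
Computationally, a chronological montage of this kind is little more than a grid of thumbnails pasted in date order. A minimal sketch of the idea (my own illustration with a hypothetical folder of date-named cover scans, not the team's actual pipeline):

# Illustrative sketch: lay out a set of cover scans chronologically in a grid,
# left to right, top to bottom. Folder and file naming are hypothetical.
import math
from pathlib import Path

from PIL import Image

THUMB_W, THUMB_H = 60, 80                  # size of each cell in the montage

paths = sorted(Path("time_covers").glob("*.jpg"))   # sorted by date-prefixed filename
cols = math.ceil(math.sqrt(len(paths))) or 1
rows = math.ceil(len(paths) / cols)

montage = Image.new("RGB", (cols * THUMB_W, rows * THUMB_H), "white")
for i, path in enumerate(paths):
    thumb = Image.open(path).resize((THUMB_W, THUMB_H))
    x, y = (i % cols) * THUMB_W, (i // cols) * THUMB_H
    montage.paste(thumb, (x, y))

montage.save("time_covers_montage.jpg")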

Another, slightly more sophisticated, approach to the same dataset—distributing the Time covers horizontally by date of publication and vertically according to relative color saturation levels (basically, vividness)—brought to light a number of “outliers,” data points that do not conform to the general pattern or range of the set as a whole. These, Douglass suggested, can help draw you into traditional close readings, but from angles that you might not have anticipated. “One cover out of forty-five hundred, a cover that might not have been significant to you if you were just searching indexes of Time or thinking about a particular topic, suddenly becomes significant in the context of a large historical or design system. It’s not that you would have known that it was important ahead of time—it’s not like, ‘Oh, this is the Finnegans Wake of Time covers’—it’s that, contextually, it’s significant. And when I dive into the visualization, I can actually see what these extremely saturated covers depict.”
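
The vertical axis in that layout needs only one number per cover. A common way to approximate relative colour saturation is to average the saturation channel after converting the image to HSV; the short sketch below assumes that metric, which is not necessarily the one the team used.

# Illustrative sketch: approximate a cover's "saturation" as the mean of the
# S channel after converting to HSV. This metric is an assumption, not
# necessarily the one used by the Software Studies team.
import numpy as np
from PIL import Image

def mean_saturation(path):
    """Return mean saturation in [0, 1] for the image at `path`."""
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=np.float64)
    return float(hsv[..., 1].mean() / 255.0)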

A visualization created by undergraduate students in UCSD’s visual art program.
—Photo courtesy of the Software Studies Initiative.

As it turned out—and this seemed to come as a surprise to Manovich and Douglass—quite a few of those saturated outliers were covers that dealt in one way or another with communism. “Pure red, pure binary enemy, right?” laughed Manovich, who grew up in the Soviet Union during what he described as “the last stages of a decaying so-called Communist society.”

“You can think of each of these visualization techniques as a different photograph of your data,” said Manovich, “as photographs of your data taken from different points of view. If I were taking a photograph of Eduardo”—Eduardo Navas, another Software Studies researcher, who had joined us for the HIPerSpace demonstration—“I could take it from the front or from the side. I will notice in both photographs some of the same patterns, some of the same proportions, but each photograph will also give me access to particular information, each point of view will give me additional insights.”

When Douglass called up a visualization from another project, one in which the team analyzed and, in various ways, graphed over one million pages of manga (Japanese comics), Manovich remarked, “This is kind of our pièce de résistance.” It was easy to see why. Technically, the x-axis reflects standard deviation, and the y-axis, entropy—a configuration that results in the most detailed of the scanned pages, the ones with the most pictorial elements and intricate textures, appearing along the upper right curve of the graph, while the simplest, those dominated by either black or white space, trail off toward the lower left. But what most impressed me was the strangely immersive aesthetic experience it produced. What at first looked to me like a bell-shaped white blob set against a grey background, resolved, as Douglass zoomed in and around the visualization, into a complex field of truly startling density—pages from different comics, representing different genres, drawn by different artists, so crowded together that they overlap, seeming almost to compete with one another over the available space. This, it occurred to me, is about as close as I’ll ever get to stepping into one of Andreas Gursky’s photographs.
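
Both axes are simple greyscale statistics that can be computed for every scanned page: the standard deviation of pixel values reflects contrast and detail, while the Shannon entropy of the brightness histogram reflects how evenly the tones are spread. A hedged sketch of such a per-page measurement (the function name is mine, not the team's):

# Illustrative sketch: per-page greyscale statistics of the kind used to lay out
# the manga visualization (x = standard deviation, y = entropy).
import numpy as np
from PIL import Image

def page_features(path):
    """Return (std_dev, entropy) of the greyscale pixel values of one scanned page."""
    pixels = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    std_dev = float(pixels.std())

    # Shannon entropy of the 256-bin brightness histogram, in bits.
    counts, _ = np.histogram(pixels, bins=256, range=(0, 255))
    p = counts / counts.sum()
    p = p[p > 0]                        # ignore empty bins to avoid log(0)
    entropy = float(-(p * np.log2(p)).sum())

    return std_dev, entropy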

The team is aware of the artistic side of their work. In fact, their visualizations have been shown at a number of art and design venues, including the Graphic Design Museum in the Netherlands, the Gwangju Biennale in South Korea, and, last fall, at one of UCSD’s own exhibition spaces, the gallery@calit2. “For many people who enjoyed the show” at UCSD, Douglass said, “it was just about the visualization as a kind of spectacle and object of desire. They may not care about manga at all, may have no desire to read manga; but the idea that manga has a shape, or the idea that it’s all in one place—it’s a dream of flying over a landscape. It’s not about wanting to live there, it’s just the fact that you’re flying that’s so compelling.

“But,” he added, “that’s not my relationship to what I’m doing. I don’t spend half my time trying to make technical illustrations and half my time trying to create beautiful sculptures. I just move back and forth seamlessly through the space, and often don’t worry about which one I’m doing. I’ve noticed that some people will say, ‘That’s not an information visualization, that’s an artwork,’ and some people will say, ‘This is totally schematic, you did this procedurally and it’s just informative.’ But people who don’t have that kind of disciplinary anxiety just say, ‘That’s beautiful and interesting.’”

“Part of what I am trying to do,” Manovich said, “is to find visual forms in datasets which do not simply reveal patterns, but reveal patterns with some kind of connotational or additional meanings that correspond to the data. But partly, with something like the Time covers, I’m also trying to create an artistic image of history.”

James Williford is an editorial assistant at HUMANITIES and a graduate student at Georgetown University.

The Software Studies Initiative has received two grants from NEH. The first, a Humanities High-Performance Computing grant of $7,969, was used to analyze the visual properties of various cultural datasets at the National Energy Research Scientific Computing Center. The Software Studies team is currently using a Digital Humanities Start-Up grant of $50,000 to develop a version of their visualization software that will run on PC and Mac computers.

">
">


Friday, 25 February 2011

end of distress

I finally managed to figure out which software would be best for VJing; mind you, I need it to be user-friendly. Earlier I had managed to get a copy of Max/MSP, and the mere sight of the interface freaked me out; the platform was so complicated I couldn't even deal with opening a canvas and starting to do anything! Nothing...
So today, I found out on this website: http://thetechnofile.com/2010/01/12/the-technofile-awards-2010/225/ that the best VJing software for 2010 was Resolume Avenue 3.
I downloaded the trial version, and here we are, with a decent platform! Something familiar! No patches or code work!
I have already tried it a bit and I am managing; by the time the show comes, I will hopefully be a pro at this!



Tuesday, 25 January 2011

sony handycam

My camera (an old Sony Handycam) failed on me a few weeks ago. I tried to resurrect it since I have endless footage shot with it, and I also like the grainy-ish effect of the image it creates. I have managed to make it work again, but I am sure it won't last forever; hopefully it will hold out until my project ends. Otherwise, I will have to mix old footage with footage of a different quality, which becomes a sort of statement that I need to consider and think about.

Maybe mixing footage from different cameras could be a genuine way of dealing with today's rapidly changing image quality.

Thursday, 13 January 2011

Duchamp Land and Turing Land

In the paragraph below, taken from http://rhizome.org/discuss/view/28877, Lev Manovich refers "to art world -- galleries, major museums, prestigious art journals -- as Duchamp-land, in analogy with Disneyland. I will also refer to the world of computer arts, as exemplified by ISEA, Ars Electronica, SIGGRAPH art shows, etc. as Turing-land."

According to Manovich, Duchamp Land (the contemporary art world) requires art objects that are “oriented towards the 'content'”, “complicated” and that share an “ironic, self-referential, and often literally destructive attitude towards its material”; on the other hand, Turing Land (the New Media Art world) is oriented “towards new, state-of-the-art computer technology,” and produces artworks that are “simple and usually lacking irony” and that “take technology which they use always seriously.”

MyLifeBits

While going through the theme of biography as a narrative form in the digital realm, I was surprised to find out about this project.
I am pasting the article as is below since it contains some great external links. Vannevar Bush, the inspiration behind this whole project, was already part of my research; I am glad things are making sense and I am able to find things connecting.

-------------
MyLifeBits

MyLifeBits is a lifetime store of everything. It is the fulfillment of Vannevar Bush's 1945 Memex vision including full-text search, text & audio annotations, and hyperlinks.

Total Recall is coming out this September. This book is the culmination of our thoughts regarding MyLifeBits and the larger CARPE research agenda. Stay up to date at the Total Recall blog.

There are two parts to MyLifeBits: an experiment in lifetime storage, and a software research effort.

The experiment: Gordon Bell has captured a lifetime's worth of articles, books, cards, CDs, letters, memos, papers, photos, pictures, presentations, home movies, videotaped lectures, and voice recordings and stored them digitally. He is now paperless, and is beginning to capture phone calls, IM transcripts, television, and radio.

The software research: Jim Gemmell and Roger Lueder have developed the MyLifeBits software, which leverages SQL server to support: hyperlinks, annotations, reports, saved queries, pivoting, clustering, and fast search. MyLifeBits is designed to make annotation easy, including gang annotation on right click, voice annotation, and web browser integration. It includes tools to record web pages, IM transcripts, radio and television. The MyLifeBits screensaver supports annotation and rating. We are beginning to explore features such as document similarity ranking and faceted classification. We have collaborated with the WWMX team to get a mapped UI, and with the SenseCam team to digest and display SenseCam output.
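
To make the data model concrete, here is a purely hypothetical sketch of how a lifetime store with annotations and hyperlinks between items might be laid out in a relational database. It is not the actual MyLifeBits schema (which ran on SQL Server); sqlite3 is used only to keep the example self-contained.

# Hypothetical sketch of a personal "lifetime store": items, free-text
# annotations, and hyperlinks between items. NOT the real MyLifeBits schema.
import sqlite3

conn = sqlite3.connect("lifebits_demo.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS items (
    id      INTEGER PRIMARY KEY,
    kind    TEXT,      -- e.g. 'photo', 'email', 'web page', 'phone call'
    created TEXT,      -- ISO timestamp
    content TEXT       -- path to the captured file, or the text itself
);
CREATE TABLE IF NOT EXISTS annotations (
    item_id INTEGER REFERENCES items(id),
    note    TEXT       -- text or transcribed voice annotation
);
CREATE TABLE IF NOT EXISTS links (
    from_item INTEGER REFERENCES items(id),
    to_item   INTEGER REFERENCES items(id)   -- hyperlink between two stored items
);
""")

# Store an item, annotate it, and run a simple text search over annotations.
cur = conn.execute("INSERT INTO items (kind, created, content) VALUES (?, ?, ?)",
                   ("photo", "2003-05-01T10:00:00", "photos/img_0001.jpg"))
conn.execute("INSERT INTO annotations (item_id, note) VALUES (?, ?)",
             (cur.lastrowid, "family photo from the lake house"))
print(conn.execute("SELECT i.content FROM items i JOIN annotations a ON a.item_id = i.id "
                   "WHERE a.note LIKE ?", ("%lake%",)).fetchall())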

Support for academic research: Our team led the 2005 Digital Memories (Memex) RFP, which supported 14 universities and led to an impressive list of publications. We also established the ACM CARPE Workshops: CARPE 2004, CARPE 2005, CARPE 2006

Watch our demo videos

Papers

Presentations

  • Gordon Bell's SIGMOD Keynote (June 14, 2005): MyLifeBits, A Transaction Processing Database for Everything Personal. The talk included project history, demonstration screens, architecture, size and shape of the Bell database (200,000 items, 100 GBytes), and research challenges for the database community. PowerPoint (22 MB)
  • Jim Gemmell's MyLifeBits talk given at a number of universities: Feb 2005 version PowerPoint (10 MB)
  • Gordon Bell's talk, given at BayCHI, on 11 February 2003 at PARC, Palo Alto (4.8 MByte PPT) and U.S. Naval Post Graduate School, Monterey on 6 February 2003.
  • MyLifeBits: A lifetime personal store beginning at 1:22. Streaming webcast of Bell by Austrian Telecom at Austria's European (Technology) Forum Alpbach, Plenary Session speaker, "The World of Tomorrow", held Thursday 26 August 2004. See also the PowerPoint presentation (approx. 10 MB).

MyLifeBits In The News

Du sollst nicht vergessen, Der Spiegel, 4/14/2008
Total Recall: Storing every life memory in a surrogate brain, ComputerWorld, 4/2/2008
Don't forget to back up your brain, Fox News, 11/14/2007
Remember This?, The New Yorker, May 28, 2007
Total recall becomes a reality, The Telegraph, 4/21/2007
Your Whole Life is Going to Bits, Sydney Morning Herald 4/14/2007
Researcher Records His Life On Computer, CBS Evening News 4/9/2007
Perfect Memory, WATTnow, March 2007
Lifeblogging: Is a virtual brain good for the real one? Ars technica, 2/7/2007
On the Record, All the Time, Chronicle of Higher Education, 2/4/2007
Digital Diary, San Francisco Chronicle, 1/28/2007
The Persistence of Memory, NPR Radio "On the Media" show, 1/5/2007
How Microsoft’s Gordon Bell is Reengineering Human Memory (and Soon, Your Life and Business), Fast Company, Nov 2006.
Digital age may bring total recall in future, CNN 10/16/2006.
El hombre que guarda todos los recuerdos de su vida en bits, La Crónica de Hoy (Mexico), 7/16/2006.
That's My Life, Aria Magazine April 2006.
The ultimate digital diary The Dominion Post 5/31/2006
In 2021 You'll Enjoy Total Recall Popular Science 5/18/2006
The Memory Machine, Varsity.co.uk, 3/2/2006
Life Bytes, NPR Radio "Living on Earth" show, 1/20/2006
The man with the perfect memory - just don't ask him to remember what's in it The Guardian, 12/28/2005
Bytes of my life, Hindustan Times, 11/17/2005
Total Recall, IEEE Spectrum, 11/1/2005 Podcast on IEEE Spectrum Radio (Choose arrow on October 2005 show and select "MyLifeBits -- the digitized life of Gordon Bell")
Turning Your Life Into Bits, Indexed, Los Angeles Times 7/11/2005
Wouldn't It Be Nice The Wall Street Journal 5/23/2005
Life Bits IEEE Spectrum Online May 2005
How To Be A Pack Rat, Forbes.com 4/29/2005 - see also blog entry by Thomas Hawk at eHomeUpgrade
Computer sage cuts paperwork, converts his life to digital format The Seattle Times 4/9/2005
Channel 9 video interviews 8/21/2004: Intro, Gemmell, Lueder
Slices of Life Spiked-Online 8/19/2004
Next-generation search tools to refine results CNET 8/9/2004
Life in byte-sized pieces The Age, 7/18/2004
Removable Media For Our Minds TheFeature 3/25/2004
This is Your Life San Jose Mercury News 3/6/2004
Navigating Digital Home Networks New York Times 2/19/2004
Offloading Your Memories New York Times Magazine, Year in Ideas issue 12/14/2003 "Bright notions, bold inventions, genius schemes and mad dreams that took off (or tried to) in 2003"
Logged on for life Toronto Star 9/8/2003
This is your life--in bits U.S. News & World Report 6/23/2003
My Life in a Terabyte IT-Analysis.com 5/14/2003
How MS will know ALL about you ZD AnchorDesk 4/18/2003
Memories as Heirlooms Logged Into a Database The New York Times 3/20/2003
Microsoft Fair Forecasts Future AP 2/27/2003 (This story ran on many newspapers and news sites, including USA Today, The Globe and Mail, The San Jose Mercury News, and ABC News)
This Is Your Brain on Digits ABC News 2/5/2003
A life in bits and bytes c|net News.com 1/6/2003 (run also by ZDNet)
Your Life - On The Web Computer Research & Technology 12/20/2002
Saving Your Bits for Posterity Wired 12/6/2002
Microsoft works to create back-up brain Knowledge Management 11/25/2002
Microsoft Creating Virtual Brain NewsFactor Network 11/22/2002
Microsoft solves "giant shoebox problem" Geek.com 11/22/2002
Would you put your life in Microsoft's hands? Silicon.com (run also by ZDNet News) 11/21/2002
Microsoft Plans Digital Memory Box, a Step Toward "Surrogate Brain" BetterHumans 11/21/2002
E-hoard with Microsoft's life database vnunet.com IT Week 11/21/2002
Microsoft plans online life archive BBC News 11/20/2002
Software aims to put your life on a disk New Scientist 11/20/2002

Related links

As We May Think, by Vannevar Bush, The Atlantic Monthly, 176(1), July 1945, 101-108.

Many more links can be found at the CARPE Research Community web site

Monday, 30 August 2010

Objects as clutter, as emotional and intellectual companions, as reflective of historical periods.

I have been reading about the Cult of Less, an initiative by a young software engineer, Kelly Sutton, to get rid of everything he owns by digitizing all his possessions, then keeping the few cherished objects, selling the ones that could benefit other people, and shipping off the rest of the goods that don't interest him.

In a BBC interview, Kelly is described as the '21st-century minimalist'; he says he "got rid of much of his clutter because he felt the ever-increasing number of available digital goods have provided adequate replacements for his former physical possessions." He got rid of most of his assets, apart from his iPad, Kindle, laptop and a few other items: his records have been replaced with MP3s, his photographs are now digitized and uploaded to Flickr, and he credits his external hard drives and online services like Hulu, Facebook, Skype and Google Maps for allowing him to lead a minimalist life.

Other people seem to be following the same trend, that of dissolving the objects into digital data and living “clutter-less” and “light”.

In further research, Sherry Turkle, current director of the MIT Initiative on Technology and Self, writes in "Evocative Objects" about the power of everyday things; she cherishes objects as "emotional and intellectual companions that anchor memory, sustain relationships, and provoke new ideas." According to Turkle, "the simplest objects are shown to bring philosophy down to earth", and her idea of "evocative objects" goes as far as to say that objects carry both ideas and passion. The roles of objects she discusses in her book range from design and play, to discipline and desire, history and exchange, mourning and memory, transition and passage, meditation and new vision.

On another note, I was at the British Museum a few days ago and got to see one of the shows that they have created along with a BBC Radio 4 series. The show is "A History of the World in 100 Objects", the idea being a series of selected objects grouped together to give meaning to their historical context and make connections across the world. Each grouped series explores a common theme and portrays a certain era.

So to Sherry Turkle's ways of describing objects, I would add: objects as mapping tools, objects as reflective of history, and objects as clutter (based on the British Museum experience and the Kelly Sutton cult).

Art and design always seem to be reflective of their context, and here I figured that so are objects.

Tuesday, 17 August 2010

The internet: is it changing the way we think?

A good debate about the internet.


The internet: is it changing the way we think?

American writer Nicholas Carr's claim that the internet is not only shaping our lives but physically altering our brains has sparked a lively and ongoing debate, says John Naughton. Below, a selection of writers and experts offer their opinion.

Are our minds being altered due to our increasing reliance on search engines, social networking sites and other digital technologies? Photograph: Chris Jackson/Getty Images

Every 50 years or so, American magazine the Atlantic lobs an intellectual grenade into our culture. In the summer of 1945, for example, it published an essay by the Massachusetts Institute of Technology (MIT) engineer Vannevar Bush entitled "As We May Think". It turned out to be the blueprint for what eventually emerged as the world wide web. Two summers ago, the Atlantic published an essay by Nicholas Carr, one of the blogosphere's most prominent (and thoughtful) contrarians, under the headline "Is Google Making Us Stupid?".


"Over the past few years," Carr wrote, "I've had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn't going – so far as I can tell – but it's changing. I'm not thinking the way I used to think. I can feel it most strongly when I'm reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument and I'd spend hours strolling through long stretches of prose. That's rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I'm always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle."

The title of the essay is misleading, because Carr's target was not really the world's leading search engine, but the impact that ubiquitous, always-on networking is having on our cognitive processes. His argument was that our deepening dependence on networking technology is indeed changing not only the way we think, but also the structure of our brains.

Carr's article touched a nerve and has provoked a lively, ongoing debate on the net and in print (he has now expanded it into a book, The Shallows: What the Internet Is Doing to Our Brains). This is partly because he's an engaging writer who has vividly articulated the unease that many adults feel about the way their modi operandi have changed in response to ubiquitous networking. Who bothers to write down or memorise detailed information any more, for example, when they know that Google will always retrieve it if it's needed again? The web has become, in a way, a global prosthesis for our collective memory.

It's easy to dismiss Carr's concern as just the latest episode of the moral panic that always accompanies the arrival of a new communications technology. People fretted about printing, photography, the telephone and television in analogous ways. It even bothered Plato, who argued that the technology of writing would destroy the art of remembering.

But just because fears recur doesn't mean that they aren't valid. There's no doubt that communications technologies shape and reshape society – just look at the impact that printing and the broadcast media have had on our world. The question that we couldn't answer before now was whether these technologies could also reshape us. Carr argues that modern neuroscience, which has revealed the "plasticity" of the human brain, shows that our habitual practices can actually change our neuronal structures. The brains of illiterate people, for example, are structurally different from those of people who can read. So if the technology of printing – and its concomitant requirement to learn to read – could shape human brains, then surely it's logical to assume that our addiction to networking technology will do something similar?

Not all neuroscientists agree with Carr and some psychologists are sceptical. Harvard's Steven Pinker, for example, is openly dismissive. But many commentators who accept the thrust of his argument seem not only untroubled by its far-reaching implications but are positively enthusiastic about them. When the Pew Research Centre's Internet & American Life project asked its panel of more than 370 internet experts for their reaction, 81% of them agreed with the proposition that "people's use of the internet has enhanced human intelligence".

Others argue that the increasing complexity of our environment means that we need the net as "power steering for the mind". We may be losing some of the capacity for contemplative concentration that was fostered by a print culture, they say, but we're gaining new and essential ways of working. "The trouble isn't that we have too much information at our fingertips," says the futurologist Jamais Cascio, "but that our tools for managing it are still in their infancy. Worries about 'information overload' predate the rise of the web... and many of the technologies that Carr worries about were developed precisely to help us get some control over a flood of data and ideas. Google isn't the problem – it's the beginning of a solution."

Sarah Churchwell, academic and critic

Is the internet changing our brains? It seems unlikely to me, but I'll leave that question to evolutionary biologists. As a writer, thinker, researcher and teacher, what I can attest to is that the internet is changing our habits of thinking, which isn't the same thing as changing our brains. The brain is like any other muscle – if you don't stretch it, it gets both stiff and flabby. But if you exercise it regularly, and cross-train, your brain will be flexible, quick, strong and versatile.

In one sense, the internet is analogous to a weight-training machine for the brain, as compared with the free weights provided by libraries and books. Each method has its advantage, but used properly one works you harder. Weight machines are directive and enabling: they encourage you to think you've worked hard without necessarily challenging yourself. The internet can be the same: it often tells us what we think we know, spreading misinformation and nonsense while it's at it. It can substitute surface for depth, imitation for originality, and its passion for recycling would surpass the most committed environmentalist.

In 10 years, I've seen students' thinking habits change dramatically: if information is not immediately available via a Google search, students are often stymied. But of course what a Google search provides is not the best, wisest or most accurate answer, but the most popular one.

But knowledge is not the same thing as information, and there is no question to my mind that the access to raw information provided by the internet is unparalleled and democratising. Admittance to elite private university libraries and archives is no longer required, as they increasingly digitise their archives. We've all read the jeremiads that the internet sounds the death knell of reading, but people read online constantly – we just call it surfing now. What they are reading is changing, often for the worse; but it is also true that the internet increasingly provides a treasure trove of rare books, documents and images, and as long as we have free access to it, then the internet can certainly be a force for education and wisdom, and not just for lies, damned lies, and false statistics.

In the end, the medium is not the message, and the internet is just a medium, a repository and an archive. Its greatest virtue is also its greatest weakness: it is unselective. This means that it is undiscriminating, in both senses of the word. It is indiscriminate in its principles of inclusion: anything at all can get into it. But it also – at least so far – doesn't discriminate against anyone with access to it. This is changing rapidly, of course, as corporations and governments seek to exert control over it. Knowledge may not be the same thing as power, but it is unquestionably a means to power. The question is, will we use the internet's power for good, or for evil? The jury is very much out. The internet itself is disinterested: but what we use it for is not.

Sarah Churchwell is a senior lecturer in American literature and culture at the University of East Anglia

Naomi Alderman, novelist

If I were a cow, nothing much would change my brain. I might learn new locations for feeding, but I wouldn't be able to read an essay and decide to change the way I lived my life. But I'm not a cow, I'm a person, and therefore pretty much everything I come into contact with can change my brain.

It's both a strength and a weakness. We can choose to seek out brilliant thinking and be challenged and inspired by it. Or we can find our energy sapped by an evening with a "poor me" friend, or become faintly disgusted by our own thinking if we've read too many romance novels in one go. As our bodies are shaped by the food we eat, our brains are shaped by what we put into them.

So of course the internet is changing our brains. How could it not? It's not surprising that we're now more accustomed to reading short-form pieces, to accepting a Wikipedia summary, rather than reading a whole book. The claim that we're now thinking less well is much more suspect. If we've lost something by not reading 10 books on one subject, we've probably gained as much by being able to link together ideas easily from 10 different disciplines.

But since we're not going to dismantle the world wide web any time soon, the more important question is: how should we respond? I suspect the answer is as simple as making time for reading. No single medium will ever give our brains all possible forms of nourishment. We may be dazzled by the flashing lights of the web, but we can still just step away. Read a book. Sink into the world of a single person's concentrated thoughts.

Time was when we didn't need to be reminded to read. Well, time was when we didn't need to be encouraged to cook. That time's gone. None the less, cook. And read. We can decide to change our own brains – that's the most astonishing thing of all.

Ed Bullmore, psychiatrist

Whether or not the internet has made a difference to how we use our brains, it has certainly begun to make a difference to how we think about our brains. The internet is a vast and complex network of interconnected computers, hosting an equally complex network – the web – of images, documents and data. The rapid growth of this huge, manmade, information-processing system has been a major factor stimulating scientists to take a fresh look at the organisation of biological information-processing systems like the brain.

It turns out that the human brain and the internet have quite a lot in common. They are both highly non-random networks with a "small world" architecture, meaning that there is both dense clustering of connections between neighbouring nodes and enough long-range short cuts to facilitate communication between distant nodes. Both the internet and the brain have a wiring diagram dominated by a relatively few, very highly connected nodes or hubs; and both can be subdivided into a number of functionally specialised families or modules of nodes. It may seem remarkable, given the obvious differences between the internet and the brain in many ways, that they should share so many high-level design features. Why should this be?

One possibility is that the brain and the internet have evolved to satisfy the same general fitness criteria. They may both have been selected for high efficiency of information transfer, economical wiring cost, rapid adaptivity or evolvability of function and robustness to physical damage. Networks that grow or evolve to satisfy some or all of these conditions tend to end up looking the same.
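
A toy model makes this concrete: in a Watts-Strogatz graph, a ring lattice keeps its dense local clustering while a few random rewirings add the long-range short cuts that collapse the average path length. The sketch below (illustrative only, with arbitrary parameters) uses networkx to compare the two:

# Illustrative sketch of the "small world" property using a Watts-Strogatz model:
# high clustering (like a regular lattice) combined with short average path
# lengths (like a random graph). Parameters are arbitrary.
import networkx as nx

n, k = 1000, 10                  # 1000 nodes, each joined to its 10 nearest neighbours

lattice = nx.watts_strogatz_graph(n, k, p=0.0)        # no rewiring: clustered but "large world"
small_world = nx.watts_strogatz_graph(n, k, p=0.05)   # a few long-range short cuts

for name, g in [("lattice", lattice), ("small world", small_world)]:
    print(name,
          "clustering:", round(nx.average_clustering(g), 3),
          "avg path length:", round(nx.average_shortest_path_length(g), 1))
# The rewired graph keeps most of its clustering, but its average path length collapses.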

Although there is much still to understand about the brain, the impact of the internet has helped us to learn new ways of measuring its organisation as a network. It has also begun to show us that the human brain probably does not represent some unique pinnacle of complexity but may have more in common than we might have guessed with many other information-processing networks.

Ed Bullmore is professor of psychiatry at the University of Cambridge

Geoff Dyer, writer

Sometimes I think my ability to concentrate is being nibbled away by the internet; other times I think it's being gulped down in huge, Jaws-shaped chunks. In those quaint days before the internet, once you made it to your desk there wasn't much to distract you. You could sit there working or you could just sit there. Now you sit down and there's a universe of possibilities – many of them obscurely relevant to the work you should be getting on with – to tempt you. To think that I can be sitting here, trying to write something about Ingmar Bergman and, a moment later, on the merest whim, can be watching a clip from a Swedish documentary about Don Cherry – that is a miracle (albeit one with a very potent side-effect, namely that it's unlikely I'll ever have the patience to sit through an entire Bergman film again).

Then there's the outsourcing of memory. From the age of 16, I got into the habit of memorising passages of poetry and compiling detailed indexes in the backs of books of prose. So if there was a passage I couldn't remember, I would spend hours going through my books, seeking it out. Now, in what TS Eliot, with great prescience, called "this twittering world", I just google the key phrase of the half-remembered quote. Which is great, but it's drained some of the purpose from my life.

Exactly the same thing has happened now that it's possible to get hold of out-of-print books instantly on the web. That's great too. But one of the side incentives to travel was the hope that, in a bookstore in Oregon, I might finally track down a book I'd been wanting for years. All of this searching and tracking down was immensely time-consuming – but only in the way that being alive is time-consuming.

Colin Blakemore, neurobiologist

It's curious that some of the most vociferous critics of the internet – those who predict that it will produce generations of couch potatoes, with minds of mush – are the very sorts of people who are benefiting most from this wonderful, liberating, organic extension of the human mind. They are academics, scientists, scholars and writers, who fear that the extraordinary technology that they use every day is a danger to the unsophisticated.

They underestimate the capacity of the human mind – or rather the brain that makes the mind – to capture and capitalise on new ways of storing and transmitting information. When I was at school I learned by heart great swathes of poetry and chunks of the Bible, not to mention page after page of science textbooks. And I spent years at a desk learning how to do long division in pounds, shillings and pence. What a waste of my neurons, all clogged up with knowledge and rules that I can now obtain with the click of a mouse.

I have little doubt that the printing press changed the way that humans used their memories. It must have put out of business thousands of masters of oral history and storytelling. But our brains are so remarkably adept at putting unused neurons and virgin synaptic connections to other uses. The basic genetic make-up of Homo sapiens has been essentially unchanged for a quarter of a million years. Yet 5,000 years ago humans discovered how to write and read; 3,000 years ago they discovered logic; 500 years ago, science. These revolutionary advances in the capacity of the human mind occurred without genetic change. They were products of the "plastic" potential of human brains to learn from their experience and reinvent themselves.

At its best, the internet is no threat to our minds. It is another liberating extension of them, as significant as books, the abacus, the pocket calculator or the Sinclair ZX80.

Just as each of those leaps of technology could be (and were) put to bad use, we should be concerned about the potentially addictive, corrupting and radicalising influence of the internet. But let's not burn our PCs or stomp on our iPads. Let's not throw away the liberating baby with the bathwater of censorship.

Colin Blakemore is professor of neuroscience at the University of Oxford

Ian Goodyer, psychiatrist

The key contextual point here is that the brain is a social organ and is responsive to the environment. All environments are processed by the brain, whether it's the internet or the weather – it doesn't matter. Do these environments change the brain? Well, they could and probably do in evolutionary time.

The internet is just one of a whole range of characteristics that could change the brain and it would do so by altering the speed of learning. But the evidence that the internet has a deleterious effect on the brain is zero. In fact, by looking at the way human beings learn in general, you would probably argue the opposite. If anything, the opportunity to have multiple sources of information provides a very efficient way of learning and certainly as successful as learning through other means.

It is being argued that the information coming into the brain from the internet is the wrong kind of information. It's too short, it doesn't have enough depth, so there is a qualitative loss. It's an interesting point, but the only way you could argue it is to say that people are misusing the internet. It's a bit like saying to someone who's never seen a car before and has no idea what it is: "Why don't you take it for a drive and you'll find out?" If you seek information on the internet like that, there's a good chance you'll have a crash. But that's because your experience has yet to inculcate what a car is. I don't think you can argue that those latent processes are going to produce brain pathology.

I think the internet is a fantastic tool and one of the great wonders of the world, if not the greatest. Homo sapiens must just learn to use it properly.

Ian Goodyer is professor of psychiatry at the University of Cambridge

Maryanne Wolf, cognitive neuroscientist

I am an apologist for the reading brain. It represents a miracle that springs from the brain's unique capacity to rearrange itself to learn something new. No one, however, knows what this reading brain will look like in one more generation.

No one today fully knows what is happening in the brains of children as they learn to read while immersed in digitally dominated mediums a minimum of six to seven hours a day (Kaiser report, 2010). The present reading brain's circuitry is a masterpiece of connections linking the most basic perceptual areas to the most complex linguistic and cognitive functions, like critical analysis, inference and novel thought (ie, "deep reading processes"). But this brain is only one variation of the many that are possible. Therein lies the cerebral beauty and the cerebral rub of plasticity.

Understanding the design principles of the plastic reading brain highlights the dilemma we face with our children. It begins with the simple fact that we human beings were never born to read. Depending on several factors, the brain rearranges critical areas in vision, language and cognition in order to read. Which circuit parts are used depends on factors like the writing system (eg English v Chinese); the formation (eg how well the child is taught); and the medium (eg a sign, a book, the internet). For example, the Chinese reading brain requires more cortical areas involved in visual memory than the English reader because of the thousands of characters. In its formation, the circuit utilises fairly basic processes to decode and, with time and cognitive effort, learns to incorporate "deep reading processes" into the expert reading circuit.

The problem is that because there is no single reading brain template, the present reading brain never needs to develop. With far less effort, the reading brain can be "short-circuited" in its formation with little time and attention (either in milliseconds or years) to the deep reading processes that contribute to the individual reader's cognitive development.

The problem of a less potentiated reading brain becomes more urgent in the discussion about technology. The characteristics of each reading medium reinforce the use of some cognitive components and potentially reduce reliance on others. Whatever any medium favours (eg, slow, deep reading v rapid information-gathering) will influence how the reader's circuit develops over time. In essence, we human beings are not just the product of what we read, but how we read.

For me, the essential question has become: how well will we preserve the critical capacities of the present expert reading brain as we move to the digital reading brain of the next generation? Will the youngest members of our species develop their capacities for the deepest forms of thought while reading or will they become a culture of very different readers – with some children so inured to a surfeit of information that they have neither the time nor the motivation to go beyond superficial decoding? In our rapid transition into a digital culture, we need to figure out how to provide a full repertoire of cognitive skills that can be used across every medium by our children and, indeed, by ourselves.

Maryanne Wolf is the author of Proust and the Squid: The Story and Science of the Reading Brain, Icon Books, 2008

Bidisha, writer and critic

The internet is definitely affecting the way I think, for the worse. I fantasise about an entire month away from it, with no news headlines, email inboxes, idle googling or instant messages, the same way retirees contemplate a month in the Bahamas. The internet means that we can never get away from ourselves, our temptations and obsessions. There's something depressing about knowing I can literally and metaphorically log on to the same homepage, wherever I am in the world.

My internet use and corresponding brain activity follow a distinct pattern of efficiency. There's the early morning log-on, the quick and accurate scan of the day's news, the brisk queries and scheduling, the exchange of scripts of articles or edited book extracts.

After all this good stuff, there's what I call the comet trail: the subsequent hours-long, bitty, unsatisfying sessions of utter timewasting. I find myself looking up absolute nonsense only tangentially related to my work, fuelled by obsessions and whims and characterised by topic-hopping, bad spelling, squinting, forum lurking and comically wide-ranging search terms. I end having created nothing myself, feeling isolated, twitchy and unable to sleep, with a headache and painful eyes, not having left the house once.

The internet enables you to look up anything you want and get it slightly wrong. It's like a never-ending, trashy magazine sucking all time, space and logic into its bottomless maw. And, like all trashy magazines, it has its own tone, slang and lexicon. I was tempted to construct this piece in textspeak, Tweet abbreviations or increasingly abusive one-liners to demonstrate the level of wit the internet has facilitated – one that is frighteningly easy to mimic and perpetuate. What we need to counteract the slipshod syntax, off-putting abusiveness, unruly topic-roaming and frenetic, unreal "social networking" is good, old-fashioned discipline. We are the species with the genius to create something as wondrous as the internet in the first place. Surely we have enough self-control to stay away from Facebook.