New York Times Imagines Microsoft Surface As the Future of Newspaper Reading


An article by Megan Garber of the Nieman Journalism Lab shows how the New York Times Company R&D Lab is using Microsoft Surface to envision a not-too-distant future in which reading the newspaper on a tabletop is commonplace.

According to the article, they are "betting breakfast will be less about sharing out newsprint and more about swiping through stories, ambient commerce, and the quantified self."

Here's the transcript:

So the first thing you'll notice here is that we have changed the way that the layout works. We've gotten rid of that sort of broadsheet design of columns and headlines in favor of a more tactile experience. Working at a table, you expect to be able to manipulate physical objects. So what we've represented each section as is a stack of the photographs for each of the articles that live therein.

If you're looking for a section and you can't find it here, you can scroll each of these -- they're a scrollable column -- and the idea here is that you can share this space, as well. You might be sitting across from someone and sharing the paper with them. So you can turn these columns so that they're facing the opposite direction.

What I can then do is open up any of these sections and pull up a carousel of articles. As you can see, we've left space for advertising, again, to work with our partners and continue to make this a viable business. And then what we can do, when we open them up, is page through them just like any other reader application. You can just swipe to the next page. And if you're here by yourself, you can unfold the paper as you would with a regular paper and take up a little more room here at the table.

Here in the reader view, though, the photography tends to take a bit of a backseat as compared to the navigation. So what we've been able to do is tap on the photos, and for any article, the photos sort of spring out of the template. And now we can take them, move them around, scale them up, and show them to our partner across the table.

And then once we're done with the article, the photos themselves can continue to live on in this space, making the table a little messy and a little more playful.

In addition to that, we wanted to make sure we kept those social features of being able to share an article and send it to a friend. So what I can do there is, again, open up a carousel, pick something, I can leave a note on any of these different areas … so we can leave little notes on any of these articles. Here, just sort of a quick "did you see this?" typed hastily, with typos. And then what I can do is, I can share that. I can share that with people who work here at the Lab, or at Facebook, or on Twitter.

But then that begs the question: How can I announce my presence to the table? What's my feature for logging in? It didn't seem right to be able to walk up and type in a login, or have it scan your hand, necessarily. But, you know, typically, when you get home at the end of the day, you throw your keys, your phone, your bag onto the kitchen table. That gives us an opportunity to recognize that I'm here. I put my phone down, I get these little red radials coming out of it. And now I'm presented with a list of those articles that have been shared with me. I can tap that last one that's been left, and then it comes right back. I can take a look at that and see what's been left for me. And that could have naturally spawned some sort of an alert on my phone, or on my laptop at work, in a couple of different ways.

And certainly we can have the table react to other objects, as well -- you know, it is a table, first and foremost. So you might be eating your breakfast or having a cup of coffee. That gives us an opportunity to be a little playful with the ad experience, as well.

So then the next thing that we do here with this table surface is to talk a little bit about the way we think technology, particularly consumer technology, will be changing the experience of consuming news and creating news. So what we've done here is to use this application as an opportunity to learn a bit more about different devices. I can take a device -- for example, this is a 3-D camera from Panasonic -- and rather than describing all of its features, and providing more of a view into that, I can use the table as a tool for this portion of the presentation. I get a price tag that shows what kind of device this is, what its model number is, a range of prices that we were able to find on the Internet, as well as a range of reviews that we found.

And then we can attach content to it, as well. For example, this is an article that was written back in January that compared the Panasonic camera here to another Sony camera that was similar and came out at the same time. And it shows us, in a couple of different ways, how New York Times content will be finding its way into experiences that we don't necessarily own or control. And that's by design.

We'll be doing a lot of work in tagging our articles with different locations, or people, or concepts, and opening that up to APIs and developers where they can build them into their own experiences. So this is one example of where that might happen.
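The tagging-and-APIs idea the speaker describes can be sketched in code. As one illustration (not the Lab's actual implementation), a developer could query articles by a tagged concept through the public New York Times Article Search API; the exact filter syntax here is an assumption, and `YOUR_KEY` is a placeholder:

```python
# Hypothetical sketch: building a request against the public NYT
# Article Search API to fetch articles tagged with a given concept.
# The subject-facet filter syntax is an assumption for illustration.
from urllib.parse import urlencode

BASE = "https://api.nytimes.com/svc/search/v2/articlesearch.json"

def build_query(concept, api_key="YOUR_KEY"):
    """Build a request URL filtering articles by a tagged subject facet."""
    params = {
        "fq": f'subject:("{concept}")',  # filtered query on the subject tag
        "api-key": api_key,
    }
    return BASE + "?" + urlencode(params)

url = build_query("Cameras")
print(url)
```

A third-party app (like the table's price-tag view) would issue this request and embed the returned headlines in its own interface.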

And then the last thing I'll show you, here, is, to the question of, "Great, we've got all these devices in here; how do we as people begin to interact, as well?" -- well, we're instrumenting ourselves increasingly. For example, this is the docking station for a FitBit. (I'll get that out of your way.) I've been wearing one of these for a while here in the Lab -- it's just a simple electronic pedometer. It'll track how many steps you've taken, and it'll use this dock to sync it up or to charge it and bring that information into a service that shows you how much activity you've had over the course of the last few days, done in 15-minute intervals.
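The 15-minute-interval view described above is a simple time-bucketing exercise. Here is a minimal sketch with made-up step data (the event timestamps and counts are hypothetical, not FitBit's actual data format):

```python
# Sketch: bucketing pedometer step events into 15-minute intervals,
# as in the activity view described above. The events are made up.
from datetime import datetime

def bucket_15min(ts):
    """Round a timestamp down to the start of its 15-minute interval."""
    return ts.replace(minute=(ts.minute // 15) * 15, second=0, microsecond=0)

# Hypothetical step events: (timestamp, steps counted since last event)
events = [
    (datetime(2011, 3, 28, 8, 3), 120),
    (datetime(2011, 3, 28, 8, 11), 80),
    (datetime(2011, 3, 28, 8, 20), 200),
]

totals = {}
for ts, steps in events:
    key = bucket_15min(ts)
    totals[key] = totals.get(key, 0) + steps

for interval, steps in sorted(totals.items()):
    print(interval.strftime("%H:%M"), steps)  # one row per 15-minute bucket
```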

The trouble, though, is that this kind of experience of sitting at the table may not be where you want to be presented with that information. But there are some places within the house where that kind of data and context makes a lot more sense. So, for example, getting ready in the morning. You might be weighing yourself, checking out your figure, seeing if your clothes fit really well. Presenting you with this kind of information might provide you with a sort of behavioral cue.

So what we wanted to do is build that experience. And unlike the table here, where we were able to use a commercial product, we actually had to build that ourselves. So we built a "magic mirror," which we'll show you next.

About The Author

Deepak Gupta is an IT & Web Consultant. He is the founder and CEO of DIT Technologies, where he's engaged in providing technology consultancy and the design and development of desktop, web, and mobile applications using various tools and software. Sign up for email updates. Google+ Profile.