[Image: Sekai Camera at Shibuya Station]

There’s been much made of Tonchidot‘s Sekai Camera, one of the first Augmented Reality iPhone apps to allow users to add their own content to the virtual-world database powering it.

And rightly so. Whilst Augmented Reality has been around for a long time (starting out in the military), this is the first time that it’s been made available to consumers without requiring specialist hardware. All you need is Japan’s best-selling mobile handset, the iPhone.

We recently tried Sekai Camera out on our 3GS, and were pretty impressed by what we saw.

The iPhone’s GPS is used to locate nearby airtags, with the built-in compass figuring out which direction you’re facing so that only the relevant tags are displayed. The tags constantly wobble around in mid-air as you move (3G users, who don’t have the compass, can manually scroll through north/south/east/west, but should upgrade to the 3GS for ease of use and overall sex appeal).
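Under the hood, the display logic presumably boils down to computing the bearing from your position to each tag and comparing it with the compass heading. Here’s a minimal sketch of that idea in Python (the function names and the 60° field of view are my own assumptions, not anything Tonchidot has published):

```python
import math

def bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees (0 = north)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def in_view(heading, tag_bearing, fov=60):
    """True if the tag's bearing falls within the camera's assumed field of view."""
    # Normalise the difference to -180..180 so the comparison wraps around north.
    diff = (tag_bearing - heading + 180) % 360 - 180
    return abs(diff) <= fov / 2
```

A tag due east of you would get a bearing of roughly 90°, so it shows up only while your heading points somewhere east-ish; turn away and it drops off the screen.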

First off then, we powered up Sekai Camera opposite Shibuya Station. As you can see there’s a fair number of tags. The white ones seem to be pre-defined – these include banks, stations, building names etc. The coloured tags are text air tags that have been added by users themselves. They don’t tend to say anything very profound, and may remind you of your first few Twitter tweets, when you had to tell everyone that you were just having a cup of coffee / brushing your teeth.

Tap on an air tag, and it fills the screen. Wait a moment, and any text displayed on it will appear in another window along with the details of the user who uploaded it (not shown below).

[Image: Sekai Camera tags opposite Shibuya Station]

If you’re anywhere crowded (like Shibuya) there can be far too many tags to see any in detail. To deal with this there’s a built-in spiralator: tap and hold your finger on a tag for a few seconds and they’ll all arrange themselves in a neat rotating spiral allowing you to read them one by one.

[Image: tags arranged in a rotating spiral]
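The spiral layout itself is easy to reproduce: walk each tag a little further along an angle and a radius that both grow with its index. A purely illustrative sketch (the real app’s geometry is unknown to me):

```python
import math

def spiral_positions(n, turns=3, max_radius=1.0):
    """Screen-space (x, y) offsets arranging n tags along a spiral from the centre out."""
    positions = []
    for i in range(n):
        t = i / max(n - 1, 1)              # progress 0..1 along the spiral
        angle = t * turns * 2 * math.pi    # how far round we've wound
        r = t * max_radius                 # radius grows as we go
        positions.append((r * math.cos(angle), r * math.sin(angle)))
    return positions
```

The first tag sits at the centre and the rest fan outwards, which is roughly the effect you see on screen when the spiralator kicks in.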

Adding your own airtags is easy. Once you’ve registered (username / password, done in-app), just choose your tag type from the menu on the right-hand side: text, photo, or sound.

Then, enter your text / take your photo / record your audio, click on ‘post’ – and it’s up. It should then show up on your screen (and that of anyone else using the app in the area) within a few seconds. Here is Paul‘s head floating in a pub in Shibuya.

[Image: Paul’s photo airtag floating in a pub in Shibuya]

The next thing to do is take a photo of the person you’ve just made an airtag of and get them to point at their own head. Believe me, it’s trickier than you’d think, as these tags tend to wobble quite a bit (thanks for your patience, Jonny!)

[Image: Jonny pointing at his own airtag]

In tag-rich areas there are some filters which may come in handy. Under ‘Filter’ you can choose a date range (anything from tags posted in the last 24 hours, to forever), and distance from your present location (50m – 300m).
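Both filters are straightforward to express in code: an age cut-off plus a distance cut-off against your current GPS fix. A hedged sketch (the tag dictionary shape and the function names here are my own invention):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance between two lat/lon points, in metres."""
    R = 6371000  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def filter_tags(tags, here, now, max_age_s, max_dist_m):
    """Keep only tags posted within max_age_s seconds and max_dist_m metres of `here`."""
    lat, lon = here
    return [t for t in tags
            if now - t["posted"] <= max_age_s
            and haversine_m(lat, lon, t["lat"], t["lon"]) <= max_dist_m]
```

With the strictest settings (last 24 hours, 50 m) the same logic prunes the Shibuya tag-storm down to a handful of fresh, genuinely nearby tags.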

You can also choose whether or not to show your own tags, other air tags, landmarks and shouts (a shout is an airtag that someone posts by clicking on ‘shout’ – it booms out through the virtual world and fills the screens of nearby users, just as a real shout might fill their ears).

There’s also a ‘pocket function’ – this stores all of your bookmarked tags, and will display them on a map.

Ok, that sounds like it could be fun – but is it actually useful?

Erm, in a word, no.

At least not yet – but expect that to change in the near future when the next update is released.

So why’s it not all that useful yet? Well, for a start, as mentioned above, it’s like the early days of Twitter when everyone was desperate to tell others that they were feeding the cat. There’s a lot of noise out there, and whilst the distance / time filters do help, they still don’t control whose tags you see and whose you don’t. Imagine a Twitter where you basically have to follow everyone near you.

Secondly, the limitations of the iPhone (notably the compass) mean that you don’t always get accurate placing of air tags. This will of course improve with future hardware updates.

But having said that, this app is AMAZING! It’s such early days for this technology, and to have a smooth user experience at this stage is, in my book, quite staggering. We will undoubtedly see significant upgrades and additional filters / functionality added in the near future (this post will be updated with news on that in a few days).

In the meantime, I’m going to be busy filling Tokyo’s virtual AR world with quality photo tags of bowls of ramen and text tags saying “I’m here now”.