Not a real Lytro, but a scale model made of solid freaking metal.
I have a one-week opportunity to pre-order a Lytro light-field camera. It’s a revolutionary way of thinking about focus, but there are still a lot of unanswered questions, and I haven’t decided yet if I’m willing to bet $400 on Lytro having the right answers.
They’re pitching their product as a solution for the focus problem, framing the technology to make the camera seem more accessible to the everyman. This is all wrong. Auto-focus is smarter than the everyman, and there is no focus problem. Fortunately for Lytro’s marketing team, this product has landed squarely in the sights of the hardcore photography enthusiast (and based on comments on Lytro’s blog today, it looks like they weren’t prepared for that). Hardcore enthusiasts understand that the point of this technology is to create a new photographic genre, to use interactive focus to tell a story.
I’m approaching this format more like video. Single images tell a story, but interactive images develop as you explore them. These storylines could consist of unexpected objects in the fore/background, different expressions on people’s faces as they react to an event, a sense of moving through a scene, accentuating infinity…and I can guarantee that there will be Lytro porn.
The hardware, in this case, isn’t enough. To nurture their niche user base, Lytro also needs to create a system that connects users to share techniques and inspiration. Enthusiasts just want a creative outlet and recognition, so this could be very compelling. When outsiders stumble across this energy, they’ll be drawn in and want to belong.
But there are considerable downsides. Lytro’s v1 product offers no control over the image. I could live with auto exposure, maybe, but it gives me pause that these images won’t be compatible with conventional photo editing tools. No brightness, no levels, no color balance, no pixel-destroying Instagram filters. As much as I would love to see a revival of doing that shit with glass, Lytro would do well to release an image processing library and give third-party developers a place in their community, too. This has the added benefit of maintaining momentum by giving users new tools to play with after the initial novelty wears off.
And in a few years, Adobe will be able to do this with photos from your existing camera. At the Adobe MAX conference last week, engineers gave a sneak peek at technology that can take a blurry image from any camera, apply motion-sensing algorithms that detect exactly how you wiggled when you took the photo, then line those pixels back up again to create a sharp image. It’s still no small feat to compile this data into an interactive file format, but the company that extends photographers’ existing toolsets with a pure software play will ultimately win.
I’m eager to explore a new creative format NOW, but $400 could also buy an iPhone 4S or the Galaxy Nexus next month. Or half a Nikon 1. As with most other gadget purchases, I’ll probably spend several days trying to talk myself out of it, then go ahead and buy one just so I’ll stop wasting my time dwelling on it.