Monthly Archives: August 2017

Photography: An eye-popping Cathedral flowmotion

How exactly Rob Whitworth pulls off these vertiginous shots (drones, lifts, hidden cameras?), I couldn’t tell you, but it’s a fun breakneck tour no matter what:

Probably the world’s first cathedral flow motion. Something of a passion project for me getting to shoot my home town and capture it in its best light. Constructed in 1096, Norwich Cathedral dominates the Norwich skyline to this day. Was super cool getting to explore all the secret areas whilst working on the video.

[Via]

AI learns about tattoo art

Rockin’: 

A couple of developers for the app Tattoodo wanted a better way to categorize all the tat pics they receive, so they built an algorithm. The pair created a neural network and taught it how to use an iPhone camera to determine the style of a tattoo.

More images means more ideas being shared, more tattoos being categorized, and — perhaps one day soon — better recommendations to help inspire your next piece. 
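
For the curious: a style classifier like this is generally an off-the-shelf image-classification network fine-tuned on labeled tattoo photos. Here’s a rough sketch of the idea in PyTorch (the folder layout is invented, and this is emphatically not Tattoodo’s actual code):

```python
# Illustrative only -- not Tattoodo's code. Assumes tattoo photos sorted into
# folders by style, e.g. data/tattoos/traditional, data/tattoos/watercolor, ...
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("data/tattoos", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained network and retrain only the final layer
# to predict one of the tattoo styles.
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # a single pass, just to show the shape of the training loop
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```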

[YouTube]

Google introduces ARCore, Web augmented reality previews

I’m delighted that Google is releasing a preview SDK of ARCore, bringing augmented reality capabilities to existing and future Android phones. Developers can start experimenting with it right now.

The team writes,

It works without any additional hardware, which means it can scale across the Android ecosystem. ARCore will run on millions of devices, starting today with the Pixel and Samsung’s S8, running 7.0 Nougat and above. We’re targeting 100 million devices at the end of the preview.

And:

We’re also working on Visual Positioning Service (VPS), a service which will enable world scale AR experiences well beyond a tabletop. And we think the Web will be a critical component of the future of AR, so we’re also releasing prototype browsers for web developers so they can start experimenting with AR, too. These custom browsers allow developers to create AR-enhanced websites and run them on both Android/ARCore and iOS/ARKit.

Super exciting times!

[YouTube]

Check out the crowdsourced “Eclipse Megamovie”

Google partnered with UC Berkeley and the Astronomical Society of the Pacific to create the Megamovie. Here’s how it all went down:

Over 1,300 citizen scientists spread out across the path of totality with their cameras ready to photograph the sun’s corona during the few minutes that it would be visible, creating an open-source dataset that can be studied by scientists for years to come. Learn about their efforts, and catch a glimpse of totality, in this video. Spoiler alert: for about two minutes, it gets pretty dark out.

Check out the results: 

This is a small preview of the larger dataset, which will be made public shortly. It will allow for improved movies like this and will provide opportunities for the scientific community to study the sun for years to come.
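
Once the full dataset drops, assembling your own mini-Megamovie is conceptually simple: sort the submitted frames by capture time and stitch them into a video. A toy sketch (the folder name and file naming are placeholders; the real pipeline also has to align, scale, and color-match photos from wildly different cameras, which is the genuinely hard part):

```python
# Toy sketch: turn a folder of eclipse photos into a movie.
# Assumes the imageio + imageio-ffmpeg packages, and that frames have already been
# cropped/resized to a common size and named so they sort by capture time.
import glob
import imageio

frame_paths = sorted(glob.glob("megamovie_preview/*.jpg"))  # placeholder folder

with imageio.get_writer("megamovie.mp4", fps=24) as writer:
    for path in frame_paths:
        writer.append_data(imageio.imread(path))
```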

[YouTube 1 & 2] [Via]

“Everyone sweeps the floor around here” — Wisdom from Adobe’s founders

I’ve found years’ worth of inspiration in the ethos expressed by Adobe founders John Warnock & Chuck Geschke:

The hands-on nature of the startup was communicated to everyone the company brought onboard. For years, Warnock and Geschke hand-delivered a bottle of champagne or cognac and a dozen roses to a new hire’s house. The employee arrived at work to find hammer, ruler, and screwdriver on a desk, which were to be used for hanging up shelves, pictures, and so on.
“From the start we wanted them to have the mentality that everyone sweeps the floor around here,” says Geschke, adding that while the hand tools may be gone, the ethic persists today.

— Page 27 of Inside the Publishing Revolution: The Adobe Story by Pamela Pfiffner

Membit: “Pokémon Go for your Memories”

There’s something happening here/What it is, ain’t exactly clear… But it’s gonna get interesting.

Membit is a geolocative photo sharing app that allows pictures to be placed and viewed in the exact location they were captured.

When you make a membit, you leave an image in place for other Membit users to find and enjoy. With Membit, you can share the past of a place with the present, or share the present of a place with the future.
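
Strip away the AR polish and the core mechanic is simple to picture: every photo carries the coordinates where it was taken, and the app surfaces the ones shot near wherever you’re standing. A back-of-the-envelope sketch (not Membit’s actual code; the data model here is invented):

```python
# Illustrative only -- a toy model of "photos pinned to the place they were taken."
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt

@dataclass
class Membit:
    user: str
    image_path: str
    lat: float
    lon: float

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine distance between two lat/lon points, in meters."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def membits_near(lat, lon, membits, radius_m=25):
    """Return the membits captured within radius_m of the viewer's position."""
    return [m for m in membits if distance_m(lat, lon, m.lat, m.lon) <= radius_m]
```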

I’m reminded of various interesting “rephotography” projects that juxtapose the past with the present. Those seem not to have moved beyond novelty—but perhaps this could? (Or maybe it’ll just induce vemödalen.) Check it out:

[Vimeo]

Adobe & Stanford collaborate to automate video editing

I’ve long been skeptical of automated video editing. As I noted in May,

My Emmy-winning colleague Bill Hensler, who used to head up video engineering at Adobe, said he’d been pitched similar tech since the early 90’s and always said, “Sure, just show me a system that can match a shot of a guy entering a room with another shot of the same thing from a different angle—then we’ll talk.” As far as I know, we’re still waiting.

Now, however, some researchers at Adobe & Stanford are narrowing the problem, focusing just on saving editors time via “Computational Video Editing for Dialogue-Driven Scenes”:

Given a script and multiple video recordings, or takes, of a dialogue-driven scene as input (left), our computational video editing system automatically selects the most appropriate clip from one of the takes for each line of dialogue in the script based on a set of user-specified film-editing idioms (right).
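
The paper has the real details, but the gist is easy to caricature: annotate each take’s footage per line of dialogue, score the candidates against the active film-editing idioms, and pick the lowest-cost clip for each line. A toy sketch of that selection step (the idioms, scores, and greedy strategy here are invented for illustration; the actual system is far richer):

```python
# Toy caricature of idiom-driven clip selection -- not the paper's actual algorithm.
# Hypothetical per-clip annotations: which take and line, who's on screen, shot type.
clips = [
    {"take": 1, "line": 0, "visible": "Alice", "shot": "close-up"},
    {"take": 2, "line": 0, "visible": "Bob",   "shot": "wide"},
    {"take": 1, "line": 1, "visible": "Bob",   "shot": "close-up"},
    {"take": 3, "line": 1, "visible": "Alice", "shot": "medium"},
]
script = [("Alice", "Where were you last night?"),
          ("Bob",   "I could ask you the same thing.")]

def idiom_cost(clip, speaker, prev):
    """Lower is better. Two toy idioms: keep the speaker on screen; avoid jump cuts."""
    cost = 0.0
    if clip["visible"] != speaker:
        cost += 1.0  # "speaker visible" idiom
    if prev and prev["take"] != clip["take"] and prev["shot"] == clip["shot"]:
        cost += 0.5  # crude "avoid jump cuts between similar shots" idiom
    return cost

# Greedily pick the cheapest clip for each line of the script.
edit = []
for line_no, (speaker, _) in enumerate(script):
    candidates = [c for c in clips if c["line"] == line_no]
    prev = edit[-1] if edit else None
    edit.append(min(candidates, key=lambda c: idiom_cost(c, speaker, prev)))

for choice in edit:
    print(f"line {choice['line']}: take {choice['take']} ({choice['shot']})")
```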

Check out the short demo (where the cool stuff starts ~2 minutes in):

Style transfer & computer vision as a service

The makers of the popular Prisma style-transfer app are branching out, offering an SDK:

[U]nderstand and modify the content of an image by encapsulating powerful machine learning models in an easy-to-use REST API or SDK for iOS or Android apps.

One example use is Sticky AI, a super simple app for creating selfie stickers & optionally styling/captioning them.
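
If you haven’t poked at one of these vision services before, the shape of the thing is usually just: POST an image, get structured JSON back. Something like this hypothetical sketch (the endpoint, parameters, and response fields are invented for illustration; see Prisma’s actual docs for the real interface):

```python
# Hypothetical example of calling an image-understanding REST API -- the URL,
# parameters, and response shape below are made up, not Prisma's real interface.
import requests

API_URL = "https://vision.example.com/v1/analyze"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"

def analyze_image(path):
    """Upload an image and return whatever tags/styles the service reports."""
    with open(path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            data={"features": "styles,objects"},  # invented parameter
        )
    resp.raise_for_status()
    return resp.json()  # e.g. {"styles": [...], "objects": [...]} -- illustrative shape

print(analyze_image("selfie.jpg"))
```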

According to TechCrunch, Prisma shares at least one investor with Fabby, the tech/SDK that Google acquired last week. Meanwhile, there’s also YOLO: Real-Time Object Detection:

This mass proliferation of off-the-shelf computer vision makes me think of Mom & Pop at Web scale: It’s gonna enable craziness like when Instagram was launched by two (!) guys thanks to the existence of AWS, OAuth, etc. It’ll be interesting to see how, thanks to Fabby & other efforts, Google can play a bigger part in enabling mass experimentation.

[YouTube]