How exactly Rob Whitworth pulls off these vertiginous shots (drones, lifts, hidden cameras?), I couldn’t tell you, but it’s a fun breakneck tour no matter what:
Probably the world’s first cathedral flow motion. Something of a passion project for me, getting to shoot my home town and capture it in its best light. Constructed in 1096, Norwich Cathedral dominates the Norwich skyline to this day. It was super cool getting to explore all the secret areas whilst working on the video.
A couple of developers for the app Tattoodo wanted a better way to categorize all the tat pics they receive, so they built an algorithm: the pair trained a neural network to determine a tattoo’s style from an iPhone camera image.
More images means more ideas being shared, more tattoos being categorized, and — perhaps one day soon — better recommendations to help inspire your next piece.
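For the curious, here’s a minimal sketch of how a style classifier along these lines might be trained, assuming a folder of tattoo images grouped by style; it illustrates a generic transfer-learning approach, not the Tattoodo team’s actual model or data:

```python
# Hypothetical sketch, not Tattoodo's pipeline: fine-tune a pretrained CNN to classify tattoo styles.
# Assumes images organized as tattoos/<style_name>/*.jpg
import tensorflow as tf

IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "tattoos", image_size=IMG_SIZE, batch_size=32)
num_styles = len(train_ds.class_names)

# Reuse ImageNet features; train only a small classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects inputs in [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_styles, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
model.save("tattoo_style_classifier.keras")  # could then be converted for on-device use
```

From there a model like this could be converted (e.g. to Core ML or TFLite) to run against the live iPhone camera feed.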
I’m delighted that Google is releasing a preview SDK of ARCore, bringing augmented reality capabilities to existing and future Android phones. Developers can start experimenting with it right now.
It works without any additional hardware, which means it can scale across the Android ecosystem. ARCore will run on millions of devices, starting today with the Pixel and Samsung’s S8, running 7.0 Nougat and above. We’re targeting 100 million devices at the end of the preview.
And:
We’re also working on Visual Positioning Service (VPS), a service which will enable world scale AR experiences well beyond a tabletop. And we think the Web will be a critical component of the future of AR, so we’re also releasing prototype browsers for web developers so they can start experimenting with AR, too. These custom browsers allow developers to create AR-enhanced websites and run them on both Android/ARCore and iOS/ARKit.
Google partnered with UC Berkeley and The Astronomical Society of the Pacific to create the Megamovie. Here’s how it all went down:
Over 1,300 citizen scientists spread out across the path of totality with their cameras ready to photograph the sun’s corona during the few minutes that it would be visible, creating an open-source dataset that can be studied by scientists for years to come. Learn about their efforts, and catch a glimpse of totality, in this video. Spoiler alert: for about two minutes, it gets pretty dark out.
Check out the results:
This is a small preview of the larger dataset, which will be made public shortly. It will allow for improved movies like this and will provide opportunities for the scientific community to study the sun for years to come.
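To make the assembly step concrete, here’s a hedged sketch that stitches a time-ordered folder of already-aligned eclipse frames into a movie; the real Megamovie pipeline (frame alignment, exposure matching, crowd-sourced timing) is far more involved:

```python
# Toy sketch: write a time-ordered set of eclipse photos out as a movie.
# Assumes the frames are already aligned, share the same dimensions, and
# sort by filename in capture order. Requires the imageio-ffmpeg plugin for .mp4 output.
import glob

import imageio.v2 as imageio

frame_paths = sorted(glob.glob("corona_frames/*.jpg"))

with imageio.get_writer("megamovie_preview.mp4", fps=24) as writer:
    for path in frame_paths:
        writer.append_data(imageio.imread(path))
```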
I’ve found years’ worth of inspiration in the dichotomy expressed by Adobe founders John Warnock & Chuck Geschke:
The hands-on nature of the startup was communicated to everyone the company brought onboard. For years, Warnock and Geschke hand-delivered a bottle of champagne or cognac and a dozen roses to a new hire’s house. The employee arrived at work to find a hammer, ruler, and screwdriver on their desk, which were to be used for hanging up shelves, pictures, and so on.
“From the start we wanted them to have the mentality that everyone sweeps the floor around here,” says Geschke, adding that while the hand tools may be gone, the ethic persists today.
There’s something happening here/What it is, ain’t exactly clear… But it’s gonna get interesting.
Membit is a geolocative photo sharing app that allows pictures to be placed and viewed in the exact location they were captured.
When you make a membit, you leave an image in place for other Membit users to find and enjoy. With Membit, you can share the past of a place with the present, or share the present of a place with the future.
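The core data problem is simple to picture: store each photo with the coordinates where it was captured, then surface the ones within a few meters of wherever the viewer is standing. A tiny hypothetical sketch (names, coordinates, and radius all invented):

```python
# Hypothetical sketch of geolocated photo lookup; not Membit's actual data model.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

membits = [
    {"photo": "bench_1998.jpg", "lat": 40.68926, "lon": -74.04450},
    {"photo": "ferry_2017.jpg", "lat": 40.70113, "lon": -74.01320},
]

def nearby(lat, lon, radius_m=25):
    """Return photos 'left in place' within radius_m of the viewer's position."""
    return [m for m in membits if haversine_m(lat, lon, m["lat"], m["lon"]) <= radius_m]

print(nearby(40.68930, -74.04455))  # finds the photo placed near that spot
```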
I’m reminded of various interesting “rephotography” projects that juxtapose the past with the present. Those seem not to have moved beyond novelty—but perhaps this could? (Or maybe it’ll just induce vemödalen.) Check it out:
I’ve long been skeptical of automated video editing. As I noted in May,
My Emmy-winning colleague Bill Hensler, who used to head up video engineering at Adobe, said he’d been pitched similar tech since the early ’90s and always said, “Sure, just show me a system that can match a shot of a guy entering a room with another shot of the same thing from a different angle—then we’ll talk.” As far as I know, we’re still waiting.
Given a script and multiple video recordings, or takes, of a dialogue-driven scene as input (left), our computational video editing system automatically selects the most appropriate clip from one of the takes for each line of dialogue in the script based on a set of user-specified film-editing idioms (right).
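Here’s a toy sketch of the underlying idea: pick one take per line of dialogue by minimizing per-line “idiom” costs plus a transition cost between consecutive choices, via dynamic programming. The costs and idioms below are invented placeholders, not the authors’ actual energy terms:

```python
# Toy sketch: treat editing as choosing, for each dialogue line, one take/clip that minimizes
# per-line "idiom" costs plus a transition cost between consecutive choices.
import math

def edit_scene(lines, takes, line_cost, transition_cost):
    """Dynamic program over (line index, chosen take); returns the chosen take per line."""
    n, k = len(lines), len(takes)
    best = [[math.inf] * k for _ in range(n)]
    back = [[0] * k for _ in range(n)]

    for t in range(k):
        best[0][t] = line_cost(lines[0], takes[t])
    for i in range(1, n):
        for t in range(k):
            for prev in range(k):
                c = (best[i - 1][prev]
                     + transition_cost(takes[prev], takes[t])
                     + line_cost(lines[i], takes[t]))
                if c < best[i][t]:
                    best[i][t], back[i][t] = c, prev

    # Trace back the cheapest sequence of takes.
    t = min(range(k), key=lambda j: best[n - 1][j])
    choice = [t]
    for i in range(n - 1, 0, -1):
        t = back[i][t]
        choice.append(t)
    choice.reverse()
    return [takes[t] for t in choice]

# Invented example "idioms": prefer close-ups on emotional lines;
# penalize consecutive picks from the same camera angle (a jump cut).
lines = [{"speaker": "A", "emotional": False}, {"speaker": "B", "emotional": True}]
takes = [{"camera": "wide"}, {"camera": "closeup_A"}, {"camera": "closeup_B"}]
line_cost = lambda line, take: 0.0 if (not line["emotional"] or take["camera"].startswith("closeup")) else 1.0
transition_cost = lambda a, b: 0.5 if a["camera"] == b["camera"] else 0.0
print(edit_scene(lines, takes, line_cost, transition_cost))
```

Swap in richer costs (speaker visibility, shot scale, pacing) and the same machinery scales up; that, as I understand it, is the spirit of the “user-specified film-editing idioms” mentioned above.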
Check out the short demo (where the cool stuff starts ~2 minutes in):
The makers of the popular Prisma style-transfer app are branching out into offering an SDK:
[U]nderstand and modify the content of an image by encapsulating powerful machine learning models in an easy-to-use REST API or SDK for iOS or Android apps.
One example use is Sticky AI, a super simple app for creating selfie stickers & optionally styling/captioning them.
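I haven’t dug into the actual interface, but the shape of such an SDK is familiar: post an image, get a processed one back. A purely hypothetical sketch (the endpoint, parameter names, and response handling below are made up; consult Prisma Labs’ real documentation):

```python
# Hypothetical sketch of calling an image-processing REST API of this kind.
# The URL, credential, and "style" parameter are placeholders, not Prisma Labs' actual API.
import requests

API_URL = "https://api.example.com/v1/stylize"   # placeholder endpoint
API_KEY = "YOUR_API_KEY"                         # placeholder credential

with open("selfie.jpg", "rb") as f:
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": f},
        data={"style": "comic"},  # invented parameter name
        timeout=30,
    )
resp.raise_for_status()

with open("selfie_stylized.jpg", "wb") as out:
    out.write(resp.content)
```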
This mass proliferation of off-the-shelf computer vision makes me think of Mom & Pop at Web scale: It’s gonna enable craziness like when Instagram was launched by two (!) guys thanks to the existence of AWS, OAuth, etc. It’ll be interesting to see how, thanks to Fabby & other efforts, Google can play a bigger part in enabling mass experimentation.
Eclipse-chaser Mike Kentrianakis of the American Astronomical Society boarded a specially rescheduled Alaska Airlines flight (interesting details here) to capture last year’s eclipse over the Pacific. (Tangentially related: Don’t forget that you can contribute creations to NASA’s Eclipse Art Quilt Project.)
If you liked the rich, trippy visuals of the previous post, check out this quick making-of from their creator:
I’m Dan Marker-Moore. Follow me on my journey through Hong Kong and Shanghai and learn how I stitch together hundreds of photos to make one Time Slice image. I use Adobe Lightroom to color correct and After Effects to composite. Available in 4k UHD!
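The concept itself is simple enough to sketch: take a time-ordered stack of tripod-aligned photos and composite one vertical strip from each, so that time sweeps across the final frame. This isn’t Dan’s actual After Effects workflow, just the idea in a few lines:

```python
# Rough sketch of a "Time Slice" composite: one vertical strip per frame, left to right in time order.
# Assumes tripod-aligned, same-sized frames whose filenames sort in capture order.
import glob

from PIL import Image

paths = sorted(glob.glob("timelapse/*.jpg"))
frames = [Image.open(p) for p in paths]
width, height = frames[0].size
strip_width = width // len(frames)   # leftover pixels on the right edge are ignored for simplicity

composite = Image.new("RGB", (width, height))
for i, frame in enumerate(frames):
    left = i * strip_width
    strip = frame.crop((left, 0, left + strip_width, height))
    composite.paste(strip, (left, 0))

composite.save("time_slice.jpg")
```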
This won’t seem like much right now, I’m sure—but I’m really excited. Per TechCrunch:
The search and Android giant has acquired AIMatter, a startup founded in Belarus that has built both a neural network-based AI platform and SDK to detect and process images quickly on mobile devices, and a photo and video editing app that has served as a proof-of-concept of the tech called Fabby.
In a lot of ways it’s the next generation of stuff we started developing when I joined Google Photos (anybody remember Halloweenify?). If you’ve ever hand-selected hair in Photoshop or (gulp) rotoscoped video, you’ll know how insane it is that these tasks can now be performed in realtime on a friggin’ telephone.
I know, I know—you think you’ve seen it all a hundred times, but I’d be surprised if you didn’t enjoy this mesmerizing work by Tyler Hulett:
Starry skies swirl and reel above Oregon. Each frame is an independent star trail photograph, and most of these clips represent an entire night of shooting somewhere across the state of Oregon. In a few clips, motion control panning leads to otherworldly patterns. No artificial effects; just stacking. Only one DSLR shutter was blown to make this film.
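“Just stacking” here means a lighten blend: keep the brightest value ever seen at each pixel, so individual star points accumulate into trails. A minimal sketch, assuming same-sized, tripod-locked frames:

```python
# Minimal star-trail stack: per-pixel maximum ("lighten") blend across all frames.
import glob

import numpy as np
from PIL import Image

paths = sorted(glob.glob("night_frames/*.jpg"))
stack = np.asarray(Image.open(paths[0]), dtype=np.uint8)

for path in paths[1:]:
    frame = np.asarray(Image.open(path), dtype=np.uint8)
    stack = np.maximum(stack, frame)   # keep the brightest value seen at each pixel

Image.fromarray(stack).save("star_trails.jpg")
```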
After leaving my team at Google, Dmitry Shapiro has set up Metaverse, a drag-and-drop authoring platform for app creation. Here the team shows how to create a “Not Hotdog”-style app in just a couple of minutes without writing any code:
I love this kind of cinematic Inside Baseball. As Kottke writes,
This is a clever bit of TV/film analysis by Evan Puschak: he reconstructs the Loot Train Battle from the most recent episode of Game of Thrones using clips from other movies and TV shows (like 300, Lord of the Rings, Stagecoach, and Apocalypse Now). In doing so, he reveals the structure that many filmed battle scenes follow, from the surprising enemy attack presaged by the distant sound of horses (as in 300) to the quiet mid-chaos reflection by a shocked commander (as in Saving Private Ryan).
The Big Red A Took My Baby Away this weekend—but it was for a good cause: Margot hosted a diverse panel of women film & TV editors at the American Cinema Editors (ACE) EditFest. They shared stories of how they’ve broken into & succeeded in the industry. Scrub ahead ~8 minutes to when the conversation starts. (Great work, M!)
A bit long but totally compelling: Check out how the GoT crew used zip-lining cameras, drones, custom camera vehicles, and more to create the epic fire-breathing finale from Sunday’s episode:
Back in 2011 we built a neat app called Adobe Color Lava (demo)—a Photoshop companion app that enabled color mixing on iPad. Later I followed the researchers behind it, Steve DiVerdi & Aravind Krishnaswamy, to Google Photos. Now Steve (who was instrumental in Photoshop 3D-based Mixer Brush & more) has returned to Adobe & is working on the blobtacular Playful Palette. Check it out in action:
Our approach is to instrument the environment, leaving the user unburdened of any equipment, creating a seamless walk-up-and-play experience. We demonstrate this technology in a series of vignettes featuring humanoid animals.
Participants can not only see and hear these characters, they can also feel them on the bench through haptic feedback. Many of the characters also interact with users directly, either through speech or touch.
Kee-rist… as if my birthday tomorrow didn’t already have me contemplating my mortality, Adobe’s new FaceStyle Web app really has me looking old. Take it for a spin, and may you get more flattering results than Old Man Nack did.
[T]he prompt of “A yellow bird with a black head, orange eyes, and an orange bill” returned a highly detailed image. The algorithm is able to pull from a collection of images and discern concepts like birds and human faces and create images that are significantly different than the images it “learned” from.
Whoa. This new technique from researchers at NVIDIA and UCSB can mix wide-angle and telephoto perspective into single frames. As PetaPixel explains,
First, you need to shoot a “stack” of photos with a fixed focal length. Starting from a distance, you move closer to your subject with each new shot. […]
The framework allows you to split up a scene based on depth, and assign a different focal length perspective to each of those depths. You can make the foreground look like it was shot with a telephoto lens and the background look like you used a wide-angle one.
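As a toy illustration of the compositing idea only (the researchers reconstruct scene geometry from the whole focal stack and re-render it, which this does not attempt): given two registered shots of the same scene, one wide and one telephoto, plus a depth map, take the foreground from the telephoto frame and the background from the wide one.

```python
# Toy depth-based perspective mix, NOT the NVIDIA/UCSB method: composite a pre-registered
# telephoto frame over a wide-angle frame using a hypothetical near/far depth split.
import numpy as np
from PIL import Image

wide = np.asarray(Image.open("wide.jpg"), dtype=np.float32)
tele = np.asarray(Image.open("tele_registered.jpg"), dtype=np.float32)   # pre-aligned to the wide shot
depth = np.asarray(Image.open("depth.png").convert("L"), dtype=np.float32) / 255.0

foreground = (depth < 0.4).astype(np.float32)   # invented threshold: smaller values = nearer
mask = foreground[..., None]                    # broadcast the mask over the RGB channels

blend = mask * tele + (1.0 - mask) * wide
Image.fromarray(blend.astype(np.uint8)).save("mixed_perspective.jpg")
```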
Watch it in action (and skip ahead ~2 minutes to get to the wow stuff):
Okay, okay—no one has said publicly that Showtime is using Adobe’s Character Animator app for this new spinoff of Stephen Colbert’s Cartoon Trump character, but come on, I’d be shocked if it weren’t powering the production. I love this live-animated character, and even more I love how the tech enables realtime visual expressivity. It should be a fun show, and in the meantime, here’s a segment produced for Colbert’s Showtime election special:
“Zane and I met on Tinder, and I wanted her to relive the experience of our first date, so I decided to mock-up my own version of Tinder,” said Lee… He created an immersive treasure hunt for Zane, mimicking the buttons and interactions of Tinder. The prototype led her from their home to the street corner where they first met, then on to the coffee shop where they had their first date.
Check out the whole story on the Adobe XD blog, as Lee (a non-designer/coder) used the app to create his prototype.
“Teaching Google Photoshop.” That’s the three-word mission statement I chose upon joining Photos. I meant it as shorthand for “getting computers to see & think like artists.” Now researchers are enabling that kind of human-savvy adjustment to run in realtime, even on handheld devices:
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and Google are presenting a new system that can automatically retouch images in the style of a professional photographer. It’s so energy-efficient, however, that it can run on a cellphone, and it’s so fast that it can display retouched images in real-time, so that the photographer can see the final version of the image while still framing the shot.
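One ingredient of this line of work is the “compute small, apply big” trick: do the expensive edit on a low-resolution copy, fit a cheap transform that explains it, and apply that transform to the full-resolution photo. The paper’s actual approach (deep bilateral learning, with a network predicting local transforms) is far more sophisticated; this is just the simplest flavor of the idea, with a stand-in auto-contrast playing the role of the “retouch”:

```python
# Toy "compute small, apply big" sketch; the real system predicts local transforms with a neural net.
import numpy as np
from PIL import Image, ImageOps

full = Image.open("photo.jpg")                                  # assumed RGB
small = full.resize((full.width // 8, full.height // 8))

# Stand-in for an expensive retouch performed at low resolution.
small_edited = ImageOps.autocontrast(small)

# Fit RGB_out ~= [R, G, B, 1] @ A by least squares on the low-res pair.
X = np.asarray(small, dtype=np.float64).reshape(-1, 3)
X = np.hstack([X, np.ones((X.shape[0], 1))])
Y = np.asarray(small_edited, dtype=np.float64).reshape(-1, 3)
A, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Apply the fitted affine color transform to the full-resolution image.
F = np.asarray(full, dtype=np.float64).reshape(-1, 3)
F = np.hstack([F, np.ones((F.shape[0], 1))])
out = np.clip(F @ A, 0, 255).astype(np.uint8).reshape(full.height, full.width, 3)
Image.fromarray(out).save("photo_retouched.jpg")
```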
And yes, it’s a small world: “The researchers trained their system on a data set created by Durand’s group and Adobe Systems;” and Jiawen interned at Adobe; and then-Adobe researcher Aseem Agarwala collaborated with Frédo before joining Google.
The work on this film began on March 28th and ended June 29th. There were 27 total days of actual chasing and many more for traveling. I drove across 10 states and put over 28,000 new miles on the ol’ 4Runner. I snapped over 90,000 time-lapse frames. I saw the most incredible mammatus displays, the best nighttime lightning and structure I’ve ever seen, a tornado birth caught on time-lapse and a display of undulatus asperatus that blew my mind. Wall clouds, massive cores, supercell structures, shelf clouds…it ended up being an amazing season and I’m so incredibly proud of the footage in this film.