Join us for a fly-through of the International Space Station. Produced by Harmonic exclusively for NASA TV UHD, the footage was shot in Ultra High Definition (4K) using a fisheye lens for extreme focus and depth of field.
Okay, I hesitated to share this as I’m allergic to corporate self-congratulation, but A) it’s some pretty amazing aerial filmmaking (including in thunderstorms!), and B) the chase of the Androids is just so weird—and it only gets weirder/funnier as it progresses. That detail reminds me of Khoi Vinh’s smart observation from a couple years back:
Apple fans like myself often criticize Google for doing things that Apple would never do, and Smarty Pins is a prime example of that. Aside from being an unfair criticism, it’s pointless. The fact that Google endeavors to produce silly things like this is on the whole a positive thing, I believe. It’s acting according to its own compass, which is what every company should be doing.
Diffusion Choir is a kinetic sculpture that uses 400 folding elements to reveal the movements of an invisible flock of birds. Its movements are always changing, driven by custom software running a flocking algorithm.
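For the curious, a flocking algorithm like the one presumably driving the sculpture can be sketched in a few lines. These are the classic Reynolds "boids" rules (cohesion, alignment, separation)—a guess at the general technique, not Sosolimited's actual code:

```python
import random

def boids_step(positions, velocities, dt=0.1,
               cohesion=0.01, alignment=0.05, separation=0.05):
    """One step of a classic Reynolds boids update in 2D.

    positions, velocities: lists of [x, y] pairs.
    Returns the updated (positions, velocities).
    """
    n = len(positions)
    cx = sum(p[0] for p in positions) / n   # flock center of mass
    cy = sum(p[1] for p in positions) / n
    avx = sum(v[0] for v in velocities) / n  # average heading
    avy = sum(v[1] for v in velocities) / n
    new_vel = []
    for p, v in zip(positions, velocities):
        # Cohesion: steer toward the flock's center of mass.
        vx = v[0] + cohesion * (cx - p[0])
        vy = v[1] + cohesion * (cy - p[1])
        # Alignment: match the flock's average velocity.
        vx += alignment * (avx - v[0])
        vy += alignment * (avy - v[1])
        # Separation: push away from any neighbor that is too close.
        for q in positions:
            dx, dy = p[0] - q[0], p[1] - q[1]
            if 0 < dx * dx + dy * dy < 1.0:
                vx += separation * dx
                vy += separation * dy
        new_vel.append([vx, vy])
    new_pos = [[p[0] + v[0] * dt, p[1] + v[1] * dt]
               for p, v in zip(positions, new_vel)]
    return new_pos, new_vel
```

In the sculpture, each element would presumably fold open in proportion to how close an invisible boid passes overhead.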
The sculpture hangs in the atrium of 650 East Kendall Street in Cambridge, Massachusetts, and was commissioned by BioMed Realty.
We love VR. We love taking pictures. So we figured, why not try smashing the two together?
Sprayscape is a quick hack using the phone’s gyroscope to take pictures on the inside of a 360-degree sphere. Just point your phone and tap the screen to spray faces, places, or anything else onto your canvas. Like what you’ve captured? Share your creations via a link and your friends can jump into your scapes and have a look around using their phones or even Google Cardboard.
Sprayscape is built in Unity with native Android support. It maps the camera feed onto a 360-degree sphere, using the Cardboard SDK to handle gyroscope data and the NatCam Unity plugin for precise camera control.
The GPU makes it all possible. On user tap or touch, the camera feed is rendered to a texture at 60 frames per second. That texture is then composited with any existing textures by a fragment shader on the GPU. That same shader also creates the scape you see in the app, handling the projection from the 2D camera to a 360-degree sphere.
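Per pixel, that compositing step amounts to alpha-blending the live camera texture into the accumulated scape wherever the "spray" lands. Here's a rough CPU sketch of the idea—the circular soft-edged mask and function name are my assumptions, not Sprayscape's actual shader:

```python
def spray_composite(scape, camera, cx, cy, radius):
    """Blend the camera texture into the accumulated scape inside a
    circular spray mask, the way a fragment shader would per pixel.

    scape, camera: 2D lists of grayscale values in [0, 1].
    (cx, cy): center of the spray in pixel coordinates.
    """
    h, w = len(scape), len(scape[0])
    out = [row[:] for row in scape]  # leave the input untouched
    for y in range(h):
        for x in range(w):
            d2 = (x - cx) ** 2 + (y - cy) ** 2
            if d2 <= radius * radius:
                # Soft falloff: full opacity at center, zero at the edge.
                a = 1.0 - (d2 ** 0.5) / radius
                out[y][x] = (1 - a) * scape[y][x] + a * camera[y][x]
    return out
```

On the GPU the same math runs for every pixel in parallel, which is why the app can keep up at 60 fps.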
When a user saves a scape, a flat panorama image is stored in the app data. When a user shares a scape, the three.js web viewer takes that flat image and wraps it to a sphere, making it navigable on mobile web by panning, tilting, and moving your device.
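That flat-image-to-sphere wrap is the standard equirectangular projection: each pixel's horizontal position becomes longitude, its vertical position latitude. A minimal sketch of the mapping (the function name is mine, not from Sprayscape's viewer):

```python
import math

def equirect_to_direction(u, v):
    """Map normalized panorama coordinates (u, v in [0, 1]) to a unit
    direction vector on the viewing sphere: u sweeps longitude,
    v sweeps latitude from the top pole to the bottom one.
    """
    lon = (u - 0.5) * 2.0 * math.pi   # -pi .. pi around the sphere
    lat = (0.5 - v) * math.pi         # pi/2 (top) .. -pi/2 (bottom)
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)
```

A viewer like the three.js one just inverts this per screen pixel: point the device, get a direction from the gyroscope, look up the matching spot in the flat panorama.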
Chuck that Dan Brown shite into some molten rock & peep this Inferno instead. (I mean, I’d listen to Werner read the phone book, and here he is talking volcanoes, for God’s sake.)
Werner Herzog’s latest documentary, Into the Inferno, heads just where its title suggests: into the red-hot magma-filled craters of some of the world’s most active and astonishing volcanoes—taking the filmmaker on one of the most extreme tours of his long career. From North Korea to Ethiopia to Iceland to the Vanuatu Archipelago, humans have created narratives to make sense of volcanoes; as stated by Herzog, “volcanoes could not care less what we are doing up here.” Into the Inferno teams Herzog with esteemed volcanologist Clive Oppenheimer to offer not only an in-depth exploration of volcanoes across the globe but also an examination of the belief systems that human beings have created around the fiery phenomena.
Many years ago the Photoshop team collaborated with Stanford professor Marc Levoy & his team. We were especially interested in their work to create a programmable device—charmingly known as the “Frankencamera”—that could run emerging algorithms to guide both capture & processing.
Fast forward to today, and Marc is leading a team of researchers at Google who just helped ship the new Pixel phone. As Marc notes, “The French agency DxO recently gave the Pixel the highest rating ever given to a smartphone camera.” On the Verge he provides lots of interesting details about how the camera works. For instance,
The Hexagon digital signal processor in Qualcomm’s Snapdragon 821 chip gives Google the bandwidth to capture RAW imagery with zero shutter lag from a continuous stream that starts as soon as you open the app. “The moment you press the shutter it’s not actually taking a shot — it already took the shot,” says Levoy. “It took lots of shots! What happens when you press the shutter button is it just marks the time when you pressed it, uses the images it’s already captured, and combines them together.”
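The trick Levoy describes is essentially a ring buffer: frames stream in continuously, old ones fall off the back, and the shutter press just timestamps a grab of what's already there. A toy sketch of the structure (my simplification, not Google's pipeline):

```python
from collections import deque

class ZslCamera:
    """Toy zero-shutter-lag capture: frames stream into a fixed-size
    ring buffer from the moment the app opens; pressing the shutter
    grabs the frames already buffered rather than starting a capture."""

    def __init__(self, depth=9):
        self.ring = deque(maxlen=depth)  # oldest frames fall off the back

    def on_frame(self, frame):
        """Called continuously by the sensor pipeline while the app is open."""
        self.ring.append(frame)

    def on_shutter(self):
        """Return the burst that was captured *before* the press."""
        return list(self.ring)
```

The real pipeline then aligns and merges that RAW burst into a single photo (the HDR+ step); that merge is omitted here.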
Read on for more—or if you just want some quick highlights, check out this two-minute tour shot entirely with a Pixel:
Turn your two-bit doodles into fine artworks with deep neural networks, generate seamless textures from photos, transfer style from one image to another, perform example-based upscaling, but wait… there’s more! (An implementation of Semantic Style Transfer.)
For many years our friends at Teehan + Lax have been producing invaluable GUI kits for iOS developers. Now they’re part of Facebook Design & have created a version for iOS 10 in PSD, Sketch, and (interestingly) Figma formats. Enjoy.
Everyone, a friend once said, is always asking the same thing over & over based on who they are. The words change, but the underlying question for each tends to be the same:
Project managers are always asking, “Are you efficient? Are you effective?”
Artists & product managers are asking, “Do you get it?” (What game are we playing, and how do we keep score?)
Engineers are always asking, “Are you a moron?” (Did you consider this, think of that, etc.?)
I thought of this on Monday as Buddhist nun Thubten Chodron spoke at Google. Instead of evaluating the what of things (what did you accomplish, create, earn, etc.), she emphasized weighing the why. What is your intention? Is, for example, a charitable contribution really driven by love of others, or is it meant to stroke your ego?
I can’t claim any deep insight here, but I was struck by the parallel & by the wisdom—in life & in work, especially PM work—of pursuing the Five Whys. Hmm; more thinking to be done.
What a meaningful use of technology. The Mine Kafon Drone claims to be “up to 20 times faster than traditional demining technologies,” and “As well as being safer, it [is] also up to 200 times cheaper.” Check it out:
Embedded into both the base and the Cloud are magnetic components that allow the cloud to float 1-2 inches off the base. While the base itself must remain plugged in, a rechargeable lithium-ion battery powers the Cloud and enables totally wireless and unobstructed levitation.
While the Smart Cloud is an actual product that you can purchase (for a whopping $3,360), no release information has yet been announced for Making Weather, although it likely will fall in a similarly expensive range.
Action Button is a snippet of code that lives on publishers’ article pages and gives their readers the option to take direct action. Speakable’s technology is able to understand the content and sentiment of an article and match it with the proper non-profit partner.
From there, users can click the Action Button to send an email to a legislator or tweet to a decision-maker or even make a donation.
I’m loving the little galleries I get like “John + Finn,” “Recent highlights of Henry,” etc. The team writes,
First, Google Photos will now help you rediscover old memories of the people in your most recent photos. As your photo library continues to grow, we hope that features like this one make it easier to look back at your fondest memories.
Second, we’re making it easier to look over the most recent highlights from your photos. If you take a lot of photos of your child, for example, you may occasionally get a card showing the best ones from the last month. (Hint hint: grandparents would love to see these!)
“We’ve always made animations from photos,” the team writes, “but now we make animations from your videos, too. And not just any videos. We look for segments that capture activity — a jump into the pool, or even just an adorable smile — and create short animations that are easy to share.”
Here’s one it generated of the Micronaxx:
And it made another from the luau we attended last week:
As before, you don’t need to do anything: just let Photos back up your vids, then watch for Assistant notifications.
Using Photoshop, X-acto knives, glue, and more, Matthew Reinhart shapes paper into amazing mechanical structures:
Using scissors, tape, and reams of creativity, Matthew Reinhart engineers paper to bend, fold, and transform into fantastic creatures, structures and locales. By adjusting the angles of folds and the depth of layers, Reinhart animates his subjects to tell dramatic stories that literally pop off the page.
Hmm—let’s see what develops here. PetaPixel explains,
First, it can manipulate an image based on very basic coloring, sketching, or warping commands. So you can change the shape, color, and size of an object in just a brush stroke or two, with the final product maintaining as natural a look as possible.
Second, it can actually generate images based on a rudimentary sketch.
PetaPixel also points out a “neural photo editor” from researchers at the University of Edinburgh:
[I]t uses machine learning to predict and apply the changes you’re intending to make. For example, if you select a bright color and start painting over someone’s hair, it will assume you want to turn them blonde; begin using longer brush strokes, and that blonde hair grows longer.
You simply select a color using their “contextual paintbrush” and have at it. The most basic inputs can produce extreme changes.
The Adobe Design Achievement Awards is a global digital media competition for student creators. Connected to industry professionals, academic leaders, and top brands, the ADAA aims to launch the next generation of student careers.
PetaPixel: Qualcomm Clear Sight will put dual cameras on a lot of smartphones: “It works exactly like Huawei’s P9 camera system—and not like the iPhone 7 Plus’s—in that one sensor is color and the other is black and white. So while the iPhone focuses on giving you more zoom power with the second camera, the second sensor in the Qualcomm module is all about capturing more detail and better low light shots.”