The “Faces of Frida” project brings Frida Kahlo’s most iconic artwork together for the first time in the largest digital retrospective of the artist, embracing her work, life, and legacy. Discover several of her pieces that have never before been viewable online, as well as personal photographs, letters, journals, clothes, and early sketches of some of Kahlo’s finest work, hidden from the world on the backs of finished paintings.
10+ years ago, I really hoped we’d get Photoshop to understand a human face as a 3D structure that one could relight, re-pose, etc. We never got there, sadly. Last year we gave Snapseed the ability to change the orientation of a face (see GIF)—another small step in the right direction. Progress marches forward, and now USC Prof. Hao Li & team have demonstrated a method for generating models with realistic skin from just ordinary input images. It’ll be fun to see where this leads (e.g. see previous).
Part of me says, “What great new tools for expressive video editing!”
The other part says, “This will not end well…”
[W]e are the first to transfer the full 3D head position, head rotation, face expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor… [W]e can reenact the full head using interactive user-controlled editing, and realize high-fidelity visual dubbing.
I’m getting way too big a kick out of the work of kinetic artist & toymaker Joseph Herscher:
Khoi Vinh writes, in an inventory worthy of Stefon (“this place has everything…”),
Herscher’s pièce de résistance may be “The Cake Server,” shown above: a gorgeous monstrosity that brings together melting butter, a glass of juice that pours its contents into itself, a baby using a smartphone and much more to serve a slice of upside-down cake to a plate in its God-intended manner of delivery. It’s a marvel to behold.
The Favorite (star) button will only appear on photos in your own library, allowing you to mark an individual item as a favorite, which, in turn, will automatically populate a new photo album with just your favorite photos. […]
Meanwhile, the heart icon is Google Photos’ version of the “like.” This will appear only on those photos that have been shared with you from your family and friends.
Perhaps my earliest memory (circa age 4) is of watching a giant tornado bounce across the plains of southern Wisconsin, blazing through arcing power lines & bounding over a farmhouse as my mom debated whether to force me & my grandparents out of the car to shelter in a ditch. “It looks like a big ice cream cone!” I said.
Photographer Mike Olbinski succeeded in capturing a half-mile-wide cone of his own in this striking clip:
“God takes care of old folks and fools,” said Chuck D, and after miraculously not parking my drone at the bottom of Three Mile Slough thanks to high crosswinds & power lines, I’m grateful to somehow get it back with this footage. (Hat-tip to the presumably freaked-out bird who makes a cameo & who didn’t try to peck my bird out of the sky.)
Google’s been working on interesting approaches to solving the classic “cocktail party problem,” i.e. isolating specific human voices in a noisy room. You can read all about how it works, or just check it out in action:
Here’s how it can improve otherwise tangled transcription (bring on the Robert Altman movies!):
Heh—having wrapped up my Adobe career working on an unsuccessful “storytelling!” tool (complete with its own storyteller 🙄), I had to laugh as I winced at this one. Stefan Sagmeister craps on “the mantle of bullshit” adopted by people trying to embellish their work with some stolen valor. (Bonus & unremarked irony: This beatdown was apparently sponsored by “Crafted Stories: Brand Storytelling.” Puzzle on that one.)
Heh—years ago College Humor parodied Photoshop demo videos (down to the point of the presenter claiming to be Bryan O’Neil Hughes), but I hadn’t seen this one—in which “Hughes” is a guest of the North—until now:
“Material Theming” effectively fixes a core gripe about the original “Material Design”: that virtually every Android app looks the same, or like it was made by Google, which isn’t ideal for brands.
The tool is currently available for Sketch, and you can use it by downloading the “Material” plugin in the app. Google aims to expand the system regularly, and will roll out new options such as animations, depth controls, and textures next.
I’m oddly intrigued by the immediacy of this 107-year-old archival footage showing New York City. As Khoi Vinh explains,
The footage has been altered in two subtle but powerful ways: the normally heightened playback speed of film from this era has been slowed down to a more “natural” pace; and the addition of a soundtrack of ambient city sounds, subtly timed with the action on screen.
The open-source Lantern project promises to transform any surface into AR using Raspberry Pi, a laser projector, and Android Things:
Rather than insisting that every object in our home and office be ‘smart’, Lantern imagines a future where projections are used to present ambient information, and relevant UI within everyday objects. Point it at a clock to show your appointments, or point it at a speaker to display the currently playing song. Unlike a screen, when Lantern’s projections are no longer needed, they simply fade away.
Man, I’m really eager to see what the Micronaxx can do with this:
Tour Creator […] enables students, teachers, and anyone with a story to tell, to make a VR tour using imagery from Google Street View or their own 360 photos. The tool is designed to let you produce professional-level VR content without a steep learning curve. […]
Once you’ve created your tour, you can publish it to Poly, Google’s library of 3D content. From Poly, it’s easy to view. All you need to do is open the link in your browser or view it in Google Cardboard.
Starting today, you may see a new photo creation that plays with pops of color. In these creations, we use AI to detect the subject of your photo and leave them in color, including their clothing and whatever they’re holding, while the background is set to black and white. You’ll see these AI-powered creations in the Assistant tab of Google Photos.
Thoughts? If you could “teach Google Photoshop,” what else would you have it create for you?
My teammates George & Tyler have been collaborating with creative technologist Dan Oved to enable realtime human pose estimation in Web browsers via the open-source Tensorflow.js (the same tech behind the aforementioned Emoji Scavenger Hunt). You can try it out here and read about the implementation details over on Medium.
Ok, and why is this exciting to begin with? Pose estimation has many uses, from interactive installations that react to the body to augmented reality, animation, fitness uses, and more. […]
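For the curious, here’s a rough sketch of what working with the model’s output looks like, assuming the `@tensorflow-models/posenet` package described in the Medium post (the element name, threshold value, and helper below are my own illustrative choices, not part of the official API):

```javascript
// PoseNet runs in the browser; its import is commented out here so the pure
// helper below stays runnable anywhere. (Assumed package, per the Medium post:)
// import * as posenet from '@tensorflow-models/posenet';

// Each keypoint the model returns looks like:
//   { part: 'nose', score: 0.92, position: { x: 120, y: 85 } }
// Keep only the body parts the model is reasonably confident about.
function confidentKeypoints(keypoints, threshold = 0.5) {
  return keypoints.filter((k) => k.score >= threshold);
}

// Typical browser flow (hypothetical <video id="webcam"> element):
// const net  = await posenet.load();                       // fetch model weights
// const pose = await net.estimateSinglePose(
//   document.getElementById('webcam'));                    // one person per frame
// console.log(confidentKeypoints(pose.keypoints));         // filtered body parts
```

Filtering on the per-keypoint confidence score is the usual first step before driving an installation or animation, since low-score keypoints tend to jitter.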
Heh—let’s see what you & your phone can see together in Google’s Emoji Scavenger Hunt:
I’m honestly not sure what to make of this wacky-looking new device, but it’s weird/interesting enough to share. I can pretty confidently say that no one wants to refocus photos/video after the fact (RIP, Lytro—and have you ever done this with an iPhone portrait image, or even known that you can?), but simply gathering depth data in 180º is interesting, as (maybe) is 360º timelapse. Check it out:
Yesterday, if you didn’t own Photoshop, the cost of getting started was $700. Today it’s $20*.
Yesterday, if you didn’t own the Master Collection, the cost was $2,600. Today it’s $50, or if you own a CS3 or later app, just $30 (!).
Yesterday, if you wanted to reach tablets via Adobe’s Digital Publishing Solution, the cost was $400 per publication. Soon it’ll be free, for unlimited publications, once you subscribe to Creative Cloud.
Adobe will offer K-12 schools its full suite of Creative Cloud software for $5 per student per year, starting May 15, it said Thursday. That’s a radical discount compared […] earlier education pricing of $240 per year […] $360 after the first year.
Kids can use it on home computers when they sign in, Adobe said.
Scattered throughout the place — which seems to be a recreation of a real Oakland home — were cut-out squares floating in the air. When I hovered over them with a cursor, I saw thumbnails of photos and videos, all of which were supposedly taken in the room that I was in. When I clicked on the thumbnails, I teleported over to them so that I could see the photos and videos up close. One was a photo of a family, while another was a short video clip of a young couple getting ready for prom.
Team members took 50,000 photos of Palmyra with drones that allowed them to avoid landmines. In western Syria, they took 150,000 photos of Crac des Chevaliers, one of the world’s most famous Crusader castles now damaged by fighting, as part of a project for UNESCO. And they surveyed Aleppo’s Old City, the devastated historic quarter known for its 13th century citadel, ancient mosque and vibrant souk, or bazaar – all now in ruins.
Having biked every day through Times Square, I’m pretty sure I burned through roughly 8.5 of my 9 lives. Now being an old breadwinner, I’m trying to keep my noggin intact while keeping my arteries at least moderately pliable, so the Lumos Helmet ($180) seems pretty dope:
After installing Lumos’ Apple Watch app, the Watch will record how its wearer makes their left and right turn gestures. When they make them in the future while the app is running, it’ll activate the corresponding signal on the helmet. The Watch will vibrate to remind wearers that it’s still blinking, and they’ll have to shake their hand to turn it off. The helmet is supposed to automatically detect when you’re braking, so there doesn’t appear to be a gesture for that.