Brave.
New.
World.
🚀
Starhopper flight test drone footage pic.twitter.com/ilvALgrpCo
— SpaceX (@SpaceX) August 28, 2019
This “AI Structure” feature looks neat, but I take slight exception to the claim that it’s “the first-ever content-aware tool to improve details only where needed.” The Auto Enhance feature built into Google+ Photos circa 2013 did this kind of thing (treating skin one way, skies another, etc.) to half a billion photos per day.
Of course, as I learned right after joining the team, Google excels at doing incredibly hard stuff & making no one notice. I observed back then that we could switch the whole thing off & no one would notice or care—and that’s exactly what happened. Why? Because A) G+ didn’t show you before/after (so people would never know what difference it had made) and B) most people are to photography as I am to wine (“Is it total shite? No? Then good enough for me”). Here at least the tech is going to that tiny fraction of us who actually care—so good on ‘em.
[YouTube]
CNET writes:
The app, called Notable Women, was developed by Google and former US Treasurer Rosie Rios. It uses augmented reality to let people see what it would look like if women were on US currency. Here’s how it works: Place any US bill in front of your phone’s camera, and the app uses digital filters — like one you’d see on Instagram or Snapchat — to overlay a new portrait on the bill. Users can choose from a database of 100 women, including the civil rights icon Rosa Parks and astronaut Sally Ride.
[YouTube]
“How about flying through obstacles at full speed, backwards?” How indeed. Trying these shots with my Mavic would earn a big “no, gracias.” It’ll be interesting to see what all is new in the forthcoming device:
[YouTube]
Man, if these folks ever need to set up a shoot, I know a couple of lads who’ll happily work for bricks—myself included. 😌
As Kottke notes, “This has some definite Star Guitar + Wallace & Gromit vibes.”
[YouTube]
Researchers from MIT Media Lab and Adobe Research recently introduced a real-time interactive augmented video system that enables presenters to use their bodies as storytelling tools by linking gestures to illustrative virtual graphic elements. […]
The speaker, positioned in front of an augmented reality mirror monitor, uses gestures to produce and manipulate the pre-programmed graphical elements.
Will presenters go for it? Will students find it valuable? I have no idea—but props to anyone willing to push some boundaries.
Whoa:
Creator Aryeh Nirenberg writes,
A timelapse of the Milky Way that was recorded using an equatorial tracking mount over a period of around 3 hours to show Earth’s rotation relative to the Milky Way.
I used a Sony a7SII with the Canon 24-70mm f2.8 lens and recorded 1100 10″ exposures at a 12-second interval. All the frames were captured at F/2.8 and 16000iso.
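Quick math for the curious: 1,100 exposures at a 12-second cadence works out to 1,100 × 12 = 13,200 seconds, or about 3.7 hours of shooting. Played back at a standard 24 fps (my assumption; the clip’s actual frame rate may differ), those 1,100 frames become roughly 46 seconds of footage.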
Kinda reminds me of “Turn Down For Spock”:
“Why doesn’t it recognize The Finger?!” asks my indignant, mischievous 10-year-old Henry, who with his brother has offered to donate a rich set of training data. 🙃
Juvenile amusement notwithstanding, I’m delighted that my teammates have released a badass hand-tracking model, especially handy (oh boy) for use with MediaPipe (see previous), our open-source framework for building ML pipelines.
Today we are announcing the release of a new approach to hand perception, which we previewed at CVPR 2019 in June, implemented in MediaPipe—an open source cross platform framework for building pipelines to process perceptual data of different modalities, such as video and audio. This approach provides high-fidelity hand and finger tracking by employing machine learning (ML) to infer 21 3D keypoints of a hand from just a single frame. Whereas current state-of-the-art approaches rely primarily on powerful desktop environments for inference, our method achieves real-time performance on a mobile phone, and even scales to multiple hands. We hope that providing this hand perception functionality to the wider research and development community will result in an emergence of creative use cases, stimulating new applications and new research avenues.
🙌
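For the tinkerers: here’s a minimal sketch of poking at the hand tracker yourself. It assumes the Python bindings MediaPipe later shipped (`pip install mediapipe opencv-python`) and a local image file—both my additions, not anything from the announcement—but the 21-landmark output is exactly what the post describes:

```python
# A minimal sketch, not from the announcement: querying MediaPipe's
# hand tracker via its Python bindings.
import cv2
import mediapipe as mp

# static_image_mode=True runs detection on every image rather than
# tracking across video frames; max_num_hands lets it scale to two hands.
with mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=2) as hands:
    image = cv2.imread("hand.jpg")  # hypothetical input file
    # MediaPipe expects RGB; OpenCV loads BGR.
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:  # None when no hand is found
        for hand in results.multi_hand_landmarks:
            # 21 landmarks per hand: x/y are normalized to image size,
            # z is depth relative to the wrist landmark.
            for idx, lm in enumerate(hand.landmark):
                print(idx, lm.x, lm.y, lm.z)
```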
I’ve gotta give this new capability a shot:
To assign a reminder, ask your Assistant, “Hey Google, remind Greg to take out the trash at 8pm.” Greg will get a notification on his Assistant-enabled Smart Display, speaker, and phone when the reminder is created, so that it’s on his radar. Greg will get notified again at the exact time you asked your Assistant to remind him. You can even quickly see which reminders you’ve assigned to Greg, simply by saying, “Hey Google, what are my reminders for Greg?”
“This is the strangest life I’ve ever known…” 😌
[YouTube]
Oh, deepfakes… 😲 #WhatHathGodWrought
I watched this video last night and it's still freaking me out. A deep fake where Bill Hader *turns into* Tom Cruise and Seth Rogan *while impersonating them*. (via @MohamedGhilan / Ctrl Shift Face on YouTube) pic.twitter.com/59evJ5Etfi
— Gavin Sheridan (@gavinsblog) August 12, 2019
Wow:
Per The Verge:
The glasses’ marquee feature is a second camera, which enables Spectacles to capture depth for the first time. Snap has built a suite of new 3D effects that take advantage of the device’s new depth perception ability. They will be exclusive to Spectacles, and the company plans to let third-party developers design depth effects starting later this year.
This time around, Snap is offering a new way to view snaps taken through Spectacles: an included 3D viewer resembling Google Cardboard. (The Spectacles 3D viewer is made of cardboard as well.)
[YouTube]
(May as well keep this Adobe-week content train rolling, amirite?)
If you’d asked me the odds of getting a tweak this deeply nerdy into Camera Raw, I’d probably have put it around 1 in 100—but dang, here we are! This is a godsend for those of us who like to apply area-based adjustments like Clarity & Dehaze to panoramas. Russell Brown shows the benefit below.
A note of caution, though: to my partial disappointment, this doesn’t (yet) work when applying Camera Raw as a filter, so if you want to use it on JPEGs, you’ll need to open them into ACR via Bridge (Cmd-R). And yes, my little Obi-Wan brain just said, “Now that’s a workflow I haven’t heard of in a long time…” Or, if you’re coming from Lightroom Classic, you’ll need to open the image as a Smart Object in Photoshop—clunky (though temporary, I’m told), but it beats the heck out of trying to fix seams manually.
[Vimeo]
At risk of making this an all-Adobe week of posts (no subtext there, honest!), you should think about coming to work with my wife & her crew in the rockin’ Digital Video & Audio group:
My old pals Will & Bryan and their teams have been hard at work on the brushing-savvy iPad app Fresco (see previous thoughts). Gizmodo offers a quick look at its current state, and Bryan has shared some perspective on its development.
[YouTube]
Ah—just in time for me to play with speed in the dronie I took last week: Premiere Rush has added the ability to selectively speed up & slow down chunks of footage via its iOS, Android, and desktop versions.
Our #1 requested feature is available today in version 1.2 — Speed!
Slow down or speed up footage, add adjustable ramps, and maintain audio pitch — speed in Rush is intuitive for the first-time video creator, yet powerful enough to satisfy video pros who are editing on the go.
Check out the quick sample below & dig into details here.
[Via Margot Nack]
Sure, tech like this will almost certainly precipitate election chaos if not war—but hey, how soon can I put it into my mustache-craving 10-year-old’s hands?
Previously from the same creator (Dr. Fakenstein):
I’ve been collaborating with these folks for a few months & am incredibly excited about this feature:
With a beta feature called Live View, you can use augmented reality (AR) to better see which way to walk. Arrows and directions are placed in the real world to guide your way. We’ve tested Live View with the Local Guides and Pixel community over the past few months, and are now expanding the beta to Android and iOS devices that support ARCore and ARKit starting this week.
Like the Dos Equis guy, “I don’t always use augmented reality—but when I do, I navigate in Google Maps.” We’ll look back at these first little steps (no pun intended) as foundational to a pretty amazing new world.
[Via]
Hey gang—I know I greatly flatter myself in thinking that my voice here will be much missed if I go quiet for a bit, especially without notice, but for what it’s worth I’m enjoying some very welcome digital downtime with family & friends in Minnesota.
Being minutes away from wrapping up the celebration of my 44th (!) solar orbit, I wanted to say thanks for being one of those still crazy enough to traipse over here periodically & browse my random finds. Fourteen (!!) years after I started this racket, it remains largely fun & rewarding. I hope you agree, and I’m grateful for your readership.
Now please excuse me for just a few more days while I get back to swamping my hard drive with a crushing backlog of drone, GoPro, Insta360, iPhone, and Osmo shots. 🙃
Oh, and for some dumb reason Google Maps insists on starting this pano (showing where we’re staying) pointed straight down into the pitch-black lake. You can drag it upwards and/or zoom out while I go file a bug/feature request. The work is never done—another possible source of gratitude.
Given that I find regular Mavic flying stressful enough even with 6-way collision avoidance activated, and given that I sadly abandoned the DJI goggles I bought, DJI’s Professional Digital FPV System almost certainly isn’t for me—but dang, it still looks fun as hell:
[Via]
Bill Duke (known elsewhere for having him some fun) offers one of the reactions of the century to a generally incomprehensible stream of words—well worth the two minutes:
[YouTube]
“There’s, like, too many generic middle-aged white guys, man!” 😝 Now peace out & pass the Scooby snacks:
[YouTube]