With the latest version of the image.canon app (available on Android or iOS) and a compatible Canon camera, you can choose to automatically transfer original quality photos to Google Photos, eliminating the hassle of using your computer or phone to back them up.
In addition to a compatible Canon camera and the image.canon app, you’ll also need a Google One membership to use this feature. To help get started, Canon users will get one month of Google One free, providing access to up to 100 GB of cloud storage, as well as other member benefits, such as premium support from Google experts and family sharing.
Awesome work by the team. Come grab a copy & build something great!
The ML Kit Pose Detection API is a lightweight, versatile solution for app developers to detect the pose of a subject’s body in real time from a continuous video or static image. A pose describes the body’s position at one moment in time with a set of x,y skeletal landmark points. The landmarks correspond to different body parts such as the shoulders and hips. The relative positions of landmarks can be used to distinguish one pose from another.
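Since each landmark is just an (x, y) point, simple geometry already distinguishes poses. As a minimal illustration (this is not ML Kit code, and the landmark coordinates below are made up), the angle at a joint falls out of two atan2 calls:

```python
import math

def joint_angle(a, b, c):
    """Angle at landmark b (degrees) formed by (x, y) landmarks a-b-c."""
    ang1 = math.atan2(a[1] - b[1], a[0] - b[0])
    ang2 = math.atan2(c[1] - b[1], c[0] - b[0])
    deg = abs(math.degrees(ang1 - ang2))
    return 360.0 - deg if deg > 180.0 else deg

# A straight arm (shoulder, elbow, wrist roughly collinear) measures ~180 degrees;
# a tightly bent one approaches 0.
straight = joint_angle((0.2, 0.3), (0.3, 0.5), (0.4, 0.7))
bent = joint_angle((0.2, 0.3), (0.3, 0.5), (0.2, 0.35))
```

Comparing a few such angles against reference values is enough to tell, say, a squat from a standing pose.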
Some of the creatures include the Aegirocassis, a sea creature that existed 480 million years ago; a creepy-looking ancient crustacean; and a digital remodel of the whale skeleton currently on view in the Natural History Museum’s Hintze Hall.
Exceedingly tangentially: who doesn’t love a good coelacanth reference?
I’m not sure what’s most bonkers: the existence of this vehicle at the turn of the last century; its continued existence & operation ~120 years and two world wars later; or the advances in machine learning that allow this level of film restoration & enhancement:
Denis Shiryaev of Neural Love then took the original footage and used a neural network to upscale it to 4K. He also colorized it, stabilized it, slowed it down to better represent real-time, and boosted the frame rate to 60fps.
I often say there’s “working at Google” and then there’s “WORKING AT GOOGLE.” I of course just “work at Google,” but folks like this are doing the latter. With so many Google & Adobe friends directly affected & evacuating, I love seeing smart folks putting their talents & resources to work like this:
Check out the Google blog for lots of interesting info on how all this actually works. It’s now showing up in specific new features:
Today we’re launching a new wildfire boundary map in Search and Maps SOS alerts in the U.S. to provide deeper insights for areas impacted by an ongoing wildfire. When a wildfire is growing, knowing exactly where the blaze is underway and how to avoid it is critical. Using satellite data, we create a wildfire boundary map so people can see a fire’s approximate size and location right on their phone or desktop.
When people look for things like “wildfire in California” or a specific fire like “Kincade fire” in Search, they will be able to see a wildfire’s approximate boundary, name, and location, as well as news articles and helpful resources from local emergency agencies in the SOS alert.
On Google Maps, people will have access to the same details, including the fire boundary, and receive warnings if they’re approaching an active blaze. If someone is exploring an area near a wildfire on Google Maps, they’ll get an ambient alert that will point them to the latest information.
Just in time for our boys as they level up their math skills:
When they’re stuck on a homework problem, students and parents can use Socratic, and soon Google Lens, to take a photo of a problem or equation they need help with. Socratic and Lens provide quick access to helpful results, such as step-by-step guides to solve the problem and detailed explainers to help you better understand key concepts.
Meanwhile, 3D in Search now covers a bunch of STEM-related topics:
Longtime VFX stud Fernando Livschitz (see previous) has turned to 2D, making spray-painted cutouts derived from a real dancer in order to create this delightful little animation. It’s only 30s long, but the subsequent making-of minute is just as cool:
The stop-motion dancers remind me of the brilliant MacPaint animations (e.g. of Childish Gambino) from Pinot Ichwandardi, who happened to say this about low-fi tech:
I know 2020 sucks a whole lot of ass (just this morning we learned that the beloved Swanton Pacific Railroad for kids may have burned up, JFC…), but it’s good to remember the amazing bits of human progress that sometimes come to life—like this one:
Building on the helpfulness of Pixel Buds’ conversation mode translate feature, which helps when you’re talking back and forth with another person, the new transcribe mode lets you follow along by hearing the translated speech directly in your ear, helping you understand the gist of what’s being said during longer listening experiences.
Launching initially for French, German, Italian and Spanish speakers to translate English speech, transcribe mode can help you stay present in the moment and focus on the person speaking.
And your headphones can even detect a crying baby (!) & lower volume:
If your dog barks, baby cries or an emergency vehicle drives by with sirens ringing, Attention Alerts—an experimental feature that notifies you of important things happening around you—lowers the volume of your content momentarily to alert you to what’s going on.
The Android Earthquake Alerts System turns your Android phone into a mini seismometer to detect earthquakes when they start. And starting in California, an integration of ShakeAlert in the Android OS enables phones to deliver earthquake alerts with added seconds to drop, cover, and hold on.
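The production system is of course far more sophisticated, and aggregates detections across many phones, but the core idea of treating a phone as a seismometer can be sketched simply: flag accelerometer readings whose magnitude departs from resting gravity. The threshold and sample values here are invented for illustration:

```python
GRAVITY = 9.81  # m/s^2; magnitude of acceleration for a phone at rest

def looks_like_shaking(samples, threshold=1.5):
    """Return True if any (ax, ay, az) sample's magnitude deviates
    from resting gravity by more than `threshold` m/s^2."""
    def magnitude(s):
        return (s[0] ** 2 + s[1] ** 2 + s[2] ** 2) ** 0.5
    return any(abs(magnitude(s) - GRAVITY) > threshold for s in samples)

still = [(0.0, 0.1, 9.8), (0.1, 0.0, 9.75)]      # phone sitting on a table
shaking = [(2.5, 1.0, 12.0), (0.0, 0.1, 9.8)]    # a sudden jolt
```

A single phone jolting proves nothing; the signal comes from many phones in one area reporting the same thing at once.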
I have no inside info on this one, but it sounds like a positive development. PetaPixel writes,
Google Images is continuing to make changes that benefit photographers. The image search engine is testing a new “Licensable” badge that aims to help photographers sell their photos through search results. […]
By specifying licensing information for the photos on your website, Google will automatically add a new “Licensable” badge to the photo’s thumbnail whenever it shows up in Google Images results. The badge tells viewers that license information is available for the photo.
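Per Google’s developer documentation on image license metadata, one way to “specify licensing information” is schema.org structured data on the hosting page: an ImageObject with `license` and `acquireLicensePage` properties. A minimal JSON-LD sketch, with placeholder URLs:

```json
{
  "@context": "https://schema.org",
  "@type": "ImageObject",
  "contentUrl": "https://example.com/photos/surf.jpg",
  "license": "https://example.com/image-license",
  "acquireLicensePage": "https://example.com/how-to-purchase"
}
```

Google also accepts equivalent IPTC photo metadata embedded in the image file itself.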
Back in the day (like, when Obama was brand new in office), I was intrigued by Microsoft’s dual-screen tablet Courier concept. Check out this preview from 2009:
The device never saw production, and some of the brains behind it went on to launch the lovely Paper drawing app for iPad. Now, however, the company is introducing the Surface Duo, and I think it looks slick:
Fun detail I’d never have guessed in 2009: it runs Android, not Windows!
The price is high ($1,400 and up for something that’s not really a phone or a laptop—though something that could replace both some of the time?), and people are expressing skepticism, but we’ll see how things go. Congrats to the folks who persevered with that interesting original concept.
Fond memories of my childhood attempts to string up bed sheets to make ski slopes for my Lego guys came rushing back as I saw the miniature work of Tatsuya Tanaka. As PetaPixel writes,
Tatsuya Tanaka is a master of turning everyday objects into miniature worlds that seem larger than life. He’s been doing it daily for almost a decade, and in the midst of the COVID pandemic, he’s started to integrate some all-too-familiar objects into his work.
We first featured Tanaka’s impressive dioramas six years ago, and believe it or not, he hasn’t stopped. Every day since April 2011 he’s created a new miniature world by pairing high-quality human figurines with everyday objects arranged into fun and creative scenes.
The Fast And The Furious (On A Budget) isn’t quite as glossy a Hollywood production as the movie it’s based on. There’s background music that still has an audio watermark repeating at regular intervals, the cars are all little plastic models that get blown up with firecrackers and canned VFX explosions, and the cast is limited to Yoshimura and Fairy, the two of them playing more than a dozen characters between them.
My old teammates keep slapping out the bangers, releasing machine-learning tech to help build apps that key off the human form.
First up is MediaPipe Iris, enabling depth estimation for faces without fancy (iPhone X-/Pixel 4-style) hardware, which in turn opens up access to accurate virtual try-on for glasses, hats, etc.:
The model enables cool tricks like realtime eye recoloring:
I always find it interesting to glimpse the work that goes in behind the scenes. For example:
To train the model from the cropped eye region, we manually annotated ~50k images, representing a variety of illumination conditions and head poses from geographically diverse regions, as shown below.
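As the team’s write-up explains, that depth estimation works without depth hardware because the horizontal diameter of the human iris is roughly constant (about 11.7 mm) across people, so under a pinhole-camera model, distance is a single division. A sketch of the idea, with a made-up focal length and pixel measurement:

```python
IRIS_DIAMETER_MM = 11.7  # human iris diameter is nearly constant across people

def depth_from_iris(focal_length_px, iris_diameter_px):
    """Pinhole-camera estimate of eye-to-camera distance, in mm."""
    return focal_length_px * IRIS_DIAMETER_MM / iris_diameter_px

# e.g. a 1000 px focal length and a 23.4 px measured iris -> 500 mm away
depth = depth_from_iris(1000.0, 23.4)
```

The hard part, of course, is the model that measures the iris to sub-pixel accuracy in the first place.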
The team has followed up this release with MediaPipe BlazePose, which is in testing now & planned for release via the cross-platform ML Kit soon:
Our approach provides human pose tracking by employing machine learning (ML) to infer 33 2D landmarks of a body from a single frame. In contrast to current pose models based on the standard COCO topology, BlazePose accurately localizes more keypoints, making it uniquely suited for fitness applications…
If one leverages GPU inference, BlazePose achieves super-real-time performance, enabling it to run subsequent ML models, like face or hand tracking.
Now I can’t wait for apps to help my long-suffering CrossFit coaches actually quantify the crappiness of my form. Thanks, team! 😛
Oh man… if some lab were tasked with conjuring peak delicious nerdery right up my & my son’s alleys, they’d stop here & declare victory.
Piloting an ocean exploration ship or Martian research shuttle is serious business. Let’s hope the control panel is up to scratch. Two studs wide and angled at 45°, the ubiquitous “2×2 decorated slope” is a LEGO minifigure’s interface to the world.
These iconic, low-resolution designs are the perfect tool to learn the basics of physical interface design. Armed with 52 different bricks, let’s see what they can teach us about the design, layout and organisation of complex interfaces.
Welcome to the world of LEGO UX design.
Enjoy! [Via Ben Jones, whom I deeply blame for taking me down this rabbit hole]
“Comparison is the thief of joy.” — Theodore Roosevelt
“Move your ass, fat boy!” — CrossFit
Okay, CF doesn’t say the latter, at least at my gym, but there’s a lot to be said for having a mix of social support/pressure—which is exactly why I’m happy to pay for CF as well as Peloton (leaderboards, encouragement, etc.).
Now the Ghost Pacer headset promises to run you ragged, or at least keep you honest, through augmented reality:
I love these insights from George Lucas, Francis Ford Coppola, Steven Spielberg, co-editor Richard Marks, and especially editor Walter Murch. Hearing about the evolution of the sound design surrounding the horse’s head scene & Mary’s death was particularly enlightening.
Noah Snavely is the O.G. researcher whose thesis work gave rise to the PhotoSynth crowd-sourcing imaging tech with which Microsoft blew minds back in the mid-aughts. He’s been at Google for the last several years, and now his team of student researchers are whipping up new magic from large sets of tourist photos:
Might be the best five minutes you’ll spend today:
Though I may not be here with you, I urge you to answer the highest calling of your heart and stand up for what you truly believe. In my life I have done all I can to demonstrate that the way of peace, the way of love and nonviolence is the more excellent way. Now it is your turn to let freedom ring.
Composer Eric Whitacre has been gathering virtual choirs for years, and with surging interest in this time of Corona, some 17,572 singers from 129 countries came together to perform his “Sing Gently.” Don’t be put off by the 10-minute running time of the vid below, as the song lasts just a couple of minutes, followed by numerous credits:
The Virtual Choir team uses every video submitted, unless there’s a technical problem with the recording. That means there are thousands of videos to sync together, and thousands of sound recordings to edit so the result sounds seamless. This time around, the team featured three sound editors, six people reviewing each submission and two executive producers; the team was scattered through the U.S., the U.K. and South Africa.
Across three different continents, they used Google Docs and Google Sheets to keep track of their progress, Google’s webmaster tools to manage thousands of email addresses and Google Translate to keep in touch with singers around the world. Singers checked the choir’s YouTube channels for rehearsal videos, footage of Whitacre conducting the song and Q&As with other singers and composers.
The tools for drawing out lighting strikes & lens flares look really fun. Of the whole suite PetaPixel writes,
Optics is described as “the definitive digital toolbox for photos,” but what it offers is maybe better described as a comprehensive mishmash of filters, presets, lighting effects and lens flares… with some masking technology thrown in for good measure. It’s honestly hard to tell what Optics is primarily meant to do, because it does so much.
Here, check it out:
If you’re curious and want to try out Optics, you can learn more about the plugin and/or download a free trial on the Boris FX website. And if you actually want to buy a copy for yourself, you can purchase a permanent license for $149, an annual subscription for $99, or a monthly subscription for $9.
On YouTube the company notes, “25% off permanent licenses and subscription options. Use coupon code: optics25.”