More powerful & easier to use; I’ll let the team explain:
Snapseed 2.13 started rolling out today. This version includes an improved UI for selecting and changing parameters. In addition to swiping up and down to choose parameters, you can now also tap the adjust icon on the bottom bar to bring up a tap-enabled parameter selector. The selected parameter will always be shown as a slider at the top of the screen. To adjust the parameter you can still swipe left and right anywhere on the screen as before.
On iOS, this release now also includes the dedicated White Balance tool that got launched on Android a while ago. This tool allows you to adjust the colors in your photo to look more natural. Just choose the auto correct option, or use the included color picker for fine control.
Finally, when opening raw images that have been captured with a creative setting on your camera, Snapseed will now show the default raw colors. Previously, the embedded color profiles sometimes limited the creative choices in editing your raw files.
Your feedback is, as always, most welcome!
Just go try this thing, or watch the short demo below. Meanwhile I’ll be trying to pry my guffawing 8-year-old Finn away from the iPad. :-p
Of course, AI Experiments isn’t just a free education for neural network nitwits. Every interaction, be it with Quick, Draw! or one of the other applets in this virtual playground, improves Google’s ability to more nimbly recognize images and language. That makes the company’s products stronger, but it also serves users. The data fuels apps like Google Photos, which uses AI to swiftly organize all your pictures. It’s a system of give and take—and with games like Quick, Draw!, it’s fun, too.
The Micronaxx really want this guy (currently on Kickstarter). (Too bad I didn’t get an autograph from creator Brent Bushnell, who keynoted Google I/O Youth this year.):
Oomiyu (pronounced “umiyoo”) is a maker kit that allows beginner inventors (8 years and up) to build a fun, customized and interactive paper craft robotic owl while getting a taste of basic mechanical principles, electronics and programming based on an Arduino 101.
Fontself looks kinda rad. $49 for Photoshop or Illustrator; $89 for both. (See, I knew there was a reason I fought like a crack-fueled mongoose to enable panel extensibility in Adobe apps. :-))
By day Googler Sriram Murali keeps spam out of your inbox; by night he captures thrilling images of the stars whirling past us—or rather, of us whirling past them:
Lost in Light, a short film on how light pollution affects the view of the night skies. Shot mostly in California, the movie shows how the view gets progressively better as you move away from the lights. Finding locations to shoot at every level of light pollution was a challenge and getting to the darkest skies with no light pollution was a journey in itself. Here’s why I think we should care more.
The night skies remind us of our place in the Universe. Imagine if we lived under skies full of stars. That reminder that we are a tiny part of this cosmos, the awe, and a special connection with this remarkable world would make us much better beings—more thoughtful, inquisitive, empathetic, kind and caring. Imagine kids growing up passionate about astronomy, looking for answers, and how advanced humankind would be—how connected and caring we’d feel with one another, how noble and adventurous we’d be, how compassionate with fellow species on Earth, and how at one with Nature we’d feel. Imagine a world where happiness of the soul is more beautiful. Ah, I feel so close to inner peace. I can only wonder how my life, and millions of others’, would have changed.
On a related note, check out how two towns in Colorado have become a haven for star watchers.
So, now that you’ve downloaded PhotoScan & digitized a bunch of images, how can you give them proper dates? Here’s how:
On photos.google.com, just select the group of photos you’d like to adjust and click “Edit date & time” in the menu dropdown. You’ll be able to shift or set the time stamps, and preview the changes before saving.
Here’s a quick video demo (showing how to edit one image, but applicable to multiple simultaneously):
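If you’d rather understand the shift-vs-set distinction the tool offers, here’s a tiny illustrative sketch (not Google’s code—just the arithmetic, using EXIF’s `YYYY:MM:DD HH:MM:SS` timestamp format; the function names are my own):

```python
from datetime import datetime, timedelta

EXIF_FMT = "%Y:%m:%d %H:%M:%S"  # the timestamp format EXIF uses

def shift_timestamp(exif_ts: str, hours: float) -> str:
    """Shift an existing timestamp by a fixed offset (what 'shift' does:
    every selected photo keeps its relative spacing)."""
    dt = datetime.strptime(exif_ts, EXIF_FMT)
    return (dt + timedelta(hours=hours)).strftime(EXIF_FMT)

def set_timestamp(_exif_ts: str, new_ts: str) -> str:
    """Overwrite the timestamp entirely (what 'set' does: all selected
    photos land on the same date)."""
    return new_ts

# A scanned print stamped 8 hours off:
print(shift_timestamp("2016:11:15 09:30:00", -8))  # → 2016:11:15 01:30:00
```

Shifting is usually what you want for a batch of scans captured in one sitting, since it preserves the order the originals were taken in.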
Moodstocks, acquired a few months ago, looks rather amazing. Here’s an extremely short demo:
Old Man Nack needs a trigger warning for stuff like this… There’s almost no better way for a Photoshop PM to break his or her own heart than to ship new painting features. I remember a crestfallen Kevin Connor after Photoshop 7 introduced a ton of new power, and when we shipped the 3D Mixer Brush in CS5, we heard crickets again. Even amazing tech like MoXi, Fresh Paint, Expresii, and Mischief never seems to find much of an audience. Most people can’t paint, will never paint, and DGAF.
But, what the heck? WetBrush looks cool, and the 3D extrusion looks like a good fit for the laser carving/printing techniques Russell Brown debuted a few years back.
Style transfer gets super fast & interactive:
The latest work from Google Brain, however, makes style transfer almost trivial to compute… Instead of learning the look of a single painting, the new style transfer network learns the style shared by multiple paintings by the same artist. […]
In this case, it’s all being done by one super-efficient neural network that knows and can combine dozens of styles based on lower-level features. That may sound academic, but it’s actually a major step forward — a highly generalizable model.
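The trick behind networks like this (if I understand the Google Brain work correctly) is conditional instance normalization: one shared network, with a tiny per-style scale and shift learned for each painting, so blending styles is just interpolating those parameters. Here’s a hedged NumPy sketch of that one operation—the variable names and shapes are my own, and this is obviously not the full transfer network:

```python
import numpy as np

def conditional_instance_norm(x, gammas, betas, style_weights):
    """Normalize a feature map per-channel, then re-scale/shift it with a
    weighted blend of per-style parameters.

    x:             feature map, shape (H, W, C)
    gammas, betas: per-style scale/shift, shape (n_styles, C)
    style_weights: blend weights over styles, shape (n_styles,)
    """
    # Instance normalization: zero mean, unit variance per channel
    mu = x.mean(axis=(0, 1), keepdims=True)
    sigma = x.std(axis=(0, 1), keepdims=True) + 1e-5
    normalized = (x - mu) / sigma

    # Blend the style-specific parameters (e.g. 70% Monet, 30% van Gogh)
    gamma = style_weights @ gammas   # shape (C,)
    beta = style_weights @ betas     # shape (C,)
    return gamma * normalized + beta

# Blend two hypothetical styles over a random feature map:
x = np.random.rand(8, 8, 3)
gammas = np.random.rand(2, 3)
betas = np.random.rand(2, 3)
out = conditional_instance_norm(x, gammas, betas, np.array([0.7, 0.3]))
```

Because the convolutional weights are shared across all styles, adding a new style costs only a few hundred extra parameters—which is why one network can hold dozens of styles and mix them interactively.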