In a recent experiment, Prague-based photographer Dan Vojtech shot the same self-portrait at a range of focal lengths and logged the effect each had on his face. The difference between 20mm and 200mm is unbelievable. So next time someone says that the camera adds 10 pounds, they’re not entirely wrong – it all depends on the equipment used.
From what I’ve tasted of desire, I hold with those who favor fire…
Wild that this can be captured on what David Lynch might call “your f***ing telephone”; wild too that it’s shared as vertical video (by Apple, which after 10+ years can’t be bothered to make iMovie handle this aspect ratio decently!)
I left Adobe in early 2014 in part due to a mix of fear & excitement about what Google was doing with AI & photography. Normal people generally just want help selecting the best images, making them look good, and maybe creating an album/book/movie from them. Accordingly, in 2013 Google+ launched automatic filtering that attempted to show just one’s best images, along with Auto Enhancement of every image & “Auto Awesomes” (animations, collages, etc.) derived from them. I couldn’t get any of this going at Adobe, and it seemed that Google was on the march (having just bought Nik Software, too), so over I went.
Unfortunately it’s really hard to know what precisely constitutes a “good” image (think shifting emotional valences vs. technical qualities). For consumers one can de-dupe somewhat (showing just one or two images from a burst) and try to screen out really blurry, badly lit shots. Even so, consumers distrust this kind of filtering & always want to look behind the curtain to ensure that the computer hasn’t missed something. Therefore when G+ Photos transitioned into just Google Photos, the feature was dropped & no one said boo. Automatic curation is still used to suggest things like books & albums, but as you may have seen when it’s applied to your own images, results can be hit or miss.
The plugin is powered by the Canon Computer Vision AI engine and uses technical models to select photos based on a number of criteria: sharpness, noise, exposure, contrast, closed eyes, and red eyes. These “technical models” have customizable settings to give you some ability to control the process.
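A selection pass over those criteria can be imagined as a simple weighted scoring function. This is purely an illustrative sketch – the class, weights, and threshold below are hypothetical, not Canon’s actual engine or API:

```python
from dataclasses import dataclass

@dataclass
class PhotoScores:
    """Hypothetical per-image scores in [0, 1]; higher is better."""
    sharpness: float
    noise: float        # 1.0 = clean, 0.0 = very noisy
    exposure: float
    contrast: float
    eyes_open: float    # 1.0 = no closed eyes detected
    no_red_eye: float   # 1.0 = no red-eye detected

def overall_score(s: PhotoScores, weights=None) -> float:
    """Combine the criteria into one keep/cull score (illustrative only)."""
    weights = weights or {
        "sharpness": 0.3, "noise": 0.15, "exposure": 0.2,
        "contrast": 0.1, "eyes_open": 0.15, "no_red_eye": 0.1,
    }
    return sum(getattr(s, k) * w for k, w in weights.items())

def select_best(photos: dict, threshold: float = 0.7) -> list:
    """Keep only images whose combined score clears the threshold."""
    return [name for name, s in photos.items()
            if overall_score(s) >= threshold]
```

The customizable settings the plugin exposes would correspond, in a scheme like this, to the user adjusting the weights or the threshold.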
Check out how Anne Dattilo, a PhD student in astronomy and astrophysics, collaborated with Google TensorFlow folks to use machine learning to discover new planets (!):
This is the story of the student who became a planet hunter. When Anne Dattilo attended a guest lecture at the University of Texas she had no idea it would be the start of a journey involving complex algorithms, a space telescope breaking down in orbit, a trip to an observatory in the Chihuahuan desert and, finally, the discovery of two new planets.
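The underlying signal those algorithms hunt for is a transit: a planet crossing its star dims it slightly and periodically. The toy sketch below just thresholds a simulated light curve against its noise level – the real pipeline trained a neural network on labeled Kepler light curves, which is far more robust than this; the function and data here are invented for illustration:

```python
import numpy as np

def find_transit_dips(flux, n_sigma=3.0):
    """Return indices where brightness drops well below the baseline.

    Toy transit detection: flag samples more than n_sigma standard
    deviations below the median flux. Real searches learn dip shapes
    from labeled examples rather than using a fixed threshold.
    """
    baseline = np.median(flux)
    noise = np.std(flux)
    return np.where(flux < baseline - n_sigma * noise)[0]

# Simulated light curve: flat with Gaussian noise, plus two 1% dips.
rng = np.random.default_rng(0)
flux = 1.0 + rng.normal(0, 0.001, 1000)
flux[200:210] -= 0.01   # first transit
flux[700:710] -= 0.01   # second transit

dips = find_transit_dips(flux)
```

Running this flags the two injected dips, which is the basic "candidate event" a classifier would then vet.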
We’re excited to announce that this year’s theme is “I show kindness by…” Acts of kindness bring more joy, light and warmth to the world. They cost nothing, but mean everything.
As submissions open, we’re inviting young artists in grades K-12 to open up their creative hearts and show us how they find ways to be kind. […]
This year’s national winner will have their artwork featured on the Google homepage for a day and receive a $30,000 college scholarship. The winner’s school will also receive a $50,000 technology package.
“Charlie enters the costume by crawling underneath, and there is a pair of shoulder straps that she uses to lift the entire costume,” their parent who uses the screen name Brandoj23 wrote on Imgur this week. “The costume looks heavier than it is. It’s almost entirely made of foam and foam board.”
The antennae are made from coat hangers and bamboo dowels. The attitude thrusters are made from disposable wine flutes. The gold foil is made from a gold space blanket material.
“The front hatch magnetically closes and magnetically stays open, and doubles as a candy sample input port,” Brandoj23 added. “The ascent stage (top part) separates from the descent stage (bottom part with landing pads).”