[W]hen you put in two photographs, the neural network-powered program analyzes the color and quality of light in the reference photo, and pastes that photo’s characteristics onto the second. This includes things like weather, season, and time of day—theoretically, a winter’s day can be turned into summer, or a cloudy day into a glorious sunrise. […]
It’s important to note that the software does not alter the structure of the photo in any way, so there’s no risk of distorting the lines, edges or perspective. The entire focus is on mimicking the color and light in order to copy the “look” or “style” of a reference photograph onto a new shot.
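The neural network presumably learns a far richer mapping than this, but as rough intuition for "copying color and light without touching structure," here's a minimal sketch of the classic per-channel statistics transfer (in the spirit of Reinhard-style color transfer) — purely illustrative, not the tool's actual algorithm:

```python
import numpy as np

def transfer_color_stats(reference, target):
    """Shift each channel of `target` so its mean/std match `reference`.

    Both images are float arrays of shape (H, W, 3) with values in [0, 1].
    Geometry is untouched -- only per-channel color statistics change.
    """
    out = np.empty_like(target)
    for c in range(3):
        t_mean, t_std = target[..., c].mean(), target[..., c].std()
        r_mean, r_std = reference[..., c].mean(), reference[..., c].std()
        # Normalize the target channel, then rescale to the reference stats.
        out[..., c] = (target[..., c] - t_mean) / (t_std + 1e-8) * r_std + r_mean
    return np.clip(out, 0.0, 1.0)
```

Because every pixel gets the same per-channel affine remap, lines, edges, and perspective can't be distorted — which is exactly the property the description above calls out.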
It’ll be fun to take this next generation for a spin (hopefully soon!).
On a related (?) note, I enjoyed this tweet: “This is the first thing i’ve seen an AI do that has truly terrified me: [see below]”
Children can freely upload their ideas to this website for positive and encouraging feedback from the Little Inventors team… Our aim is to inspire and support children around the world to use their wonderful imaginations to think up ingenious, fantastical, funny or perfectly practical invention ideas with no limits!
Check out this rather brilliant Kickstarter project from some Stanford scientists on a mission to broaden access to the wonders of exploring our world:
Foldscope is a real microscope, with magnification and resolution sufficient for imaging live individual cells, cellular organelles, embryos, swimming bacteria and much more. Because the Foldscope is so affordable and can be used anywhere, it brings science to your daily life, whether that means looking at what’s growing in your flower pot or watching bacteria from your mouth or analysing the bee stinger that got your thumb. Our goal is to encourage and enable the curious explorer in each of us and make science happen anywhere, anytime.
Wanna actually go to Mars & feel perpetually jetlagged? Hmm—while thinking that over, take a beautifully painterly flight over the planet surface, courtesy of Jan Fröjdman working with real NASA data:
The anaglyph images of Mars taken by the HiRISE camera hold information about the topography of the Martian surface. There are hundreds of high-resolution images of this type. This gives the opportunity to create different studies in 3D. In this film I have chosen some locations and processed the images into panning video clips…
Making these panning clips has really been time-consuming. In my 3D process I have manually hand-picked reference points on the anaglyph image pairs… The colors in this film are false because the anaglyph images are based on grayscale images. I have therefore color graded the clips, but I have tried to be moderate in doing this.
It’s not glamorous, but optimizing apps for low-bandwidth environments is critical to democratizing access to their benefits. Having traveled in Nepal, I can tell you that all the cool creations in the world don’t matter if you can’t even back up & share your photos.
Now your photos will back up automatically in a lightweight preview quality that’s fast on 2G connections and still looks great on a smartphone. And when a good Wi-Fi connection becomes available, your backed up photos will be replaced with high-quality versions. We’re also making it easier to share many photos at once even on low connectivity. Never mind if you’re at the beach or hiking in the mountains, with Google Photos you can now share pictures quickly even with a spotty connection by sending first in low resolution so friends and family can view them right away. They’ll later update in higher resolution when connectivity permits.
I love the animals created for this piece for Sherwin-Williams:
“In 2009,” writes the team at Buck, “we were asked by our friends at McKinney to explore a world made of color, literally. The Sherwin colors themselves are the cast of their own story of infinite possibility, taking us places that spark our sense of curiosity, exploration, and expression.”
Here’s a cool contest to celebrate the 25th (25th!!) anniversary of Adobe Premiere Pro:
Download exclusive, uncut music video footage and work with Adobe Premiere Pro CC to create your own edit of the video for their new hit song “Believer.” Deadline is April 8th.
A panel of luminary judges… will select the ultimate winner of the $25,000 Grand Prize and bragging rights.
We’re also awarding bonus prizes of $1,000 each and a year-long subscription to Creative Cloud for four special categories: Fan Favorite, Most Unexpected, Best Young Creator, and Best Short Form. And one special bonus prize of $2,500, a year-long subscription to Creative Cloud and 25 Adobe Stock credits for the cut with the best use of supplied Adobe Stock clips.
Hmm—I’m not entirely sure what to make of Opera Neon, but props to them for looking to shake up some largely staid interaction patterns. For example:
Opera Neon’s newly developed physics engine is set to breathe life back into the internet. Tabs and other objects respond to you like real objects; they have weight and move in a natural way when dragged, pushed, or even popped.
Cream floats to the top, and so do your favorite tabs; Opera Neon’s gravity system pulls your most used tabs to a prominent position on your Speed Dial.
Come on, who hasn’t watched the Gimp scene in Pulp Fiction and thought to himself, “I wish I could look like that around the office!” I will, however, give personal muffler HushMe props if it makes the user sound like Bane:
Hmm, interesting—I honestly had no idea that Sony cameras could install apps, but in retrospect the idea seems blindingly obvious: Why not be able to modify your light-capturing computer like this? PetaPixel writes,
Actually, it’s more than a grad. When you open up the app, you get several options: Graduated ND, Reverse Graduated ND, Color Stripe, Blue Sky, Sunset, and two Custom options for setting up your own presets. The presets will capture preset exposure and white balance values, and if you pick Custom, you can adjust the location and feathering of each boundary, the effect above and below that boundary, and more!
Check out the rather brilliant-looking, Lego-compatible Nimuno Loops:
Imagine being able to build around corners, on curved surfaces, or even onto the sides of that sailing ship you’ve just spent hours building. You forgot to engineer a point of attachment for that sweet dinosaur-smashing cannon? No problem. Snip a length of Nimuno Loops, stick it on the hull, mount your cannon and be on yarr way.
How lucky it was for the world that a brilliant graphics engineer (PostScript creator & Adobe co-founder John Warnock) married a graphic designer (Marva Warnock) who could provide constant input as this groundbreaking app took shape. Those were the days, when the app splash screen listed the whole team of four engineers who’d built it—one of whom was the CEO.
Watch the Illustrator story unfold, from its beginning as Adobe’s first software product, to its role in the digital publishing revolution, to becoming an essential tool for designers worldwide. Interviews include cofounder John Warnock, his wife Marva, artists and designers Ron Chan, Bert Monroy, Dylan Roscover and Jessica Hische.
It’s fun to see all these old friends celebrating an old friend. It takes me back to when I uploaded a copy of the VHS tape (hosted by John himself!) that shipped in the box with Illustrator 1.0:
There’s no longer any need to disrupt the animals’ habits and habitat using artificial light; thanks to advances in camera sensors and non-visible spectrum capture, the BBC is shooting the kind of wildlife footage that was simply unimaginable in the 80s and 90s. […]
The Vox video dives into the challenges nature documentaries like Planet Earth used to have back in the days of film, and then advances rapidly through the decades until we reach the jaw-dropping footage shot for Planet Earth II using infrared technology, thermal imaging, and incredible low-light cameras like Sony’s famed A7s.
I’m so pleased to see my Adobe friends redefining what’s possible in terms of mobile photo capture on Android & iOS, now enabling you to capture bracketed raw (DNG format) images and merge them into a high dynamic range master. Filmmaker Stu Maschwitz dives into the details via his blog, and Russell Brown provides a tour below:
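Adobe's actual merge pipeline is surely more sophisticated, but the core idea of combining bracketed exposures into one high-dynamic-range result can be sketched in a few lines of numpy. This is a simplified weighted radiance merge (assuming linear image data, roughly what raw/DNG provides), not the app's real implementation:

```python
import numpy as np

def merge_brackets(exposures, times):
    """Merge bracketed linear exposures into one HDR radiance estimate.

    `exposures` is a list of float images in [0, 1] (already linear),
    `times` the matching exposure times.  Pixels near 0 or 1 are
    down-weighted, since they carry little information (noise or clipping).
    """
    num = np.zeros_like(exposures[0], dtype=np.float64)
    den = np.zeros_like(exposures[0], dtype=np.float64)
    for img, t in zip(exposures, times):
        # Triangle weight: peaks at mid-gray, falls to zero at the extremes.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * (img / t)  # divide by exposure time -> radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)
```

Each frame votes on the scene radiance where it's well exposed, so shadows come from the long exposures and highlights from the short ones.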
After our Scottish Buddhist friend Bruce Fraser passed away ten years (!!) ago, a group gathered in SF to celebrate his life. Graham Nash, if I remember correctly, described the moment of clarity Bruce had that convinced him to change up his life. For some people that “moment” is vague, but for Bruce, Graham said, it was very clear: he was playing “That’s The Way (Uh Huh Uh Huh)” in a crummy Scottish bar band, and between “Uh” and “Huh,” he said, “I’ve gotta get the f*** out of here.”
Pay attention to these moments. I’m just sayin’. 🙂
Photographer and educator Seán Duggan shares a collection of power tips that can help you get the most out of Google Photos. Learn how to manage photo storage, use the stellar search capabilities of Google Photos, edit your photos, and make animations, slide shows, and movies from your images. Plus, learn how to share photos securely with friends and family.
So, this happened. 🙂 In Snapseed 2.16 on iOS & Android, you can:
Edit faster by using reusable “looks”: save the edits on any photo as a look, and apply saved looks to other images.
Share looks with friends and other users by generating a QR code for each.
Apply Structure to individual areas of your photo via the Selective tool.
And on Android you can:
Automatically correct the perspective of your photos using the enhanced Perspective tool.
Find inspiring tutorial content via the Insights stream. [already available on iOS]
The QR-based sharing is a fun twist. The team writes,
You now can easily share these looks with your friends and followers. Snapseed will generate a QR code that embeds your look. Scan this QR code [below] in Snapseed to apply the look to the current photo. You can easily share it through social media, on your web site, or by email and instant messaging!
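Snapseed's actual QR payload format is undocumented, but conceptually a "look" is just a bundle of edit parameters packed into something compact enough for a QR code to carry. Here's a hypothetical sketch (the `edits` dict, the helper names, and the format are all my invention; a QR library would then render the resulting string as a scannable code):

```python
import base64
import json
import zlib

def encode_look(edits):
    """Pack a dict of edit parameters into a compact ASCII payload.

    Hypothetical format: JSON -> zlib compression -> URL-safe base64,
    small enough to embed in a QR code.
    """
    raw = json.dumps(edits, sort_keys=True).encode("utf-8")
    return base64.urlsafe_b64encode(zlib.compress(raw)).decode("ascii")

def decode_look(payload):
    """Reverse of encode_look: recover the edit parameters."""
    raw = zlib.decompress(base64.urlsafe_b64decode(payload.encode("ascii")))
    return json.loads(raw)
```

The nice property of carrying the parameters rather than pixels: the same look applies cleanly to any photo you scan it against.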
Since each frame has to catch a blade in the same position as the last, the frame rate needs to be in sync with the rpm of the rotor blades. The shutter speed then needs to be fast enough to freeze the blade without too much motion blur within each frame.
Here the rotor has five blades; now let's say the rotor spins at 300 rpm. That means that, per rotation, a blade occupies any given spot five times, giving an effective rate of 1,500 blade-passes per minute. 1,500 / 60 secs = 25 per second.
Therefore shooting at 25fps will ensure the rotor blades are shot in the same position every frame. Each frame then has to be shot at a fast enough shutter speed to freeze the blade for minimal motion blur.
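The arithmetic above generalizes to any rotor: blade-passes per second is just blade count times rpm over 60. A quick sketch (function name is mine):

```python
def sync_frame_rate(blade_count, rotor_rpm):
    """Frame rate (fps) at which every frame catches a blade in the same spot.

    With N evenly spaced blades, some blade occupies a given position
    N times per rotation, so blade-passes per second = N * rpm / 60.
    """
    return blade_count * rotor_rpm / 60.0

# The example from the text: 5 blades at 300 rpm -> 25 fps.
```

Any integer divisor of that rate would also work (the blade just completes multiple passes between frames), which is handy when the exact rate isn't a standard camera frame rate.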
Tangentially related: Lance Armstrong cycling without pedaling:
I know this is soooo several days ago, but this interactive pix2pix demo (running atop Google’s TensorFlow machine learning framework) is good fun for making the stuff of (cute?) nightmares: Try sketching a cat, handbag, or shoe, then let the system try to create a photographic rendition by drawing from a large image set. Try it out and enjoy!
A lesson in aerodynamics, for instance, would start when students strap on a VR headset, like Google Cardboard or Daydream. Their teacher could then demonstrate how aerodynamics works in mixed reality before the kids remove their headsets and get to work designing windmill arms, working with their hands to create something they think will generate the most wind speed. Then, on goes the headset again. As students begin testing their windmills with a fan, embedded sensors in the windmill spindle record rotational speed, and the headset shows the students the speed of their mills.
Moment’s John Payne says,
“VR is often simply reduced to a storytelling medium, but we believed it could be used in a more integrated way with the real-world environment, more as a ‘tool’ than as an ‘experience.'”
It’s undoubtedly cool, and I’d love to see how students and teachers can put it to use. And beyond that, I’d love to see the tools that’d make it possible for thousands of other lessons (needed to fit a wide range of curricula) to be made in an economically sustainable way.
I'd characterize my outlook as guardedly optimistic. I'm reminded of when CD-ROM-based magazines arrived, and then when tablet-based magazines repeated the whole fantasy of "Now everyone will build/pay for rich, interactive 3D content!" They even debuted with a 3D windmill, for God's sake. Of course the world moved differently, voting with its feet more for Snapchat stories (crude assemblies of unedited clips, shat out even by well-funded orgs like the NYT) than for highly polished, immersive creations.
And yet hope dies last, and all of us toolmakers have the privilege of trying to rebalance the scales. If it weren’t hard, it probably wouldn’t be fun. 🙂
In this video, we see a couple more tools the team used to facilitate the making of the film. The first is a VR video game of sorts that ILM built so that Edwards could move a virtual camera around in a virtual set to find just the right camera angles to capture the action, resulting in a process that was more flexible than traditional storyboarding.
The second tool jumped around a virtual set — a complete digital model of Jedha City — and rendered hundreds of street views from it at random. The filmmakers would then look through those renders for interesting shots, finding scenes that looked more "natural" than something a digital effects artist might have come up with on purpose: basically massively parallel location scouting.