Who doesn’t like a good Rube Goldberg contraption? Of his creation, Josh Sheldon writes,
I made this robot to make light painting animations.
Each of the animations I made took between 4 and 12 hours to shoot, one frame at a time. Each frame is 1–3 long-exposure photographs of the machine performing the light painting.
Check it out (and if you’re impatient like me, jump ahead ~3 minutes to start seeing the generated artwork):
This research promises to “simultaneously captur[e] the large-scale body movements and the subtle face and hand motion of a social group of people,” all without using mocap markers:
What if your color palettes came alive, giant-sized & in 3D space? FastCo writes,
The installation is almost like three-dimensional graffiti. Set in the ruins of Les Baux, which date back to antiquity, the piece has a light footprint: The artists simply used metal rods to hang pieces of semitransparent textile patches of different sizes along the square. The resulting gradients of color are reminiscent of digital color palettes like RGB or CMYK.
Today we’re announcing that VR Creator Lab is coming to London. Participants will receive between $30,000 and $40,000 USD in funding towards their VR project, attend a three-day “boot camp” September 18–20, 2018, and receive three months of training from leading VR instructors and filmmakers.
Applications are open through 5pm British Summer Time on August 6, 2018. YouTube creators with a minimum of 10,000 subscribers and independent filmmakers are eligible.
Every day some friends of mine toil (in the loosest sense of the word) to invest Google Assistant with personality that provides real moments of delight. David Pogue met with the team to find out how it works:
“We actually have a team of writers from around the world to vet as much as we can the cultural appropriateness of the material that we put out,” Germick says. “Germans, we find, don’t particularly appreciate wordplay, in the pun sense. So our German writers need to work a different angle.” [“Awkward!!” —J.]
Fortunately for the Personality team, a principle they call “Fun in, fun out” is at play here. If you prefer an assistant without a helping of humor, you’ll never encounter it. If all you ever say to Assistant is “Set a timer for 15 minutes” and “Who was the third President?”, you won’t run into much of Assistant’s personality.
“I soon realized that the wide angle lens gives the iPhone an incredibly close focus point, allowing me to capture hard-to-pull-off wide-angle macro photos and videos,” Torres tells PetaPixel. “I set my iPhone to 240fps on 1080p (which my Canon 1DX Mark II can’t even handle), put on the wide angle lens, set it next to a hummingbird feeder in the cloud forests of Sumaco, and pressed record.”
Man, I sure love being a dad. Our resident railfan & little old man Henry (age 9) loves to get us out biking to watch the evening parade of trains (Cal, Amtrak, ACE, freight), and tonight we brought my drone. I’m fond of this shot, with accompaniment kindly provided by Eels:
And what can I say: our in-house editor (age 10) insisted on the closing title. 😌
And just for yuks, here’s the scene in 360° pano form:
For the past four years, The Ocean Agency has revealed the ocean to the world through Google Street View. Along the way, we’ve encountered a few unexpected guests. Follow along as our dive team encounters the world’s largest, most dangerous and most surprising sharks.
Apropos of Google’s Move Mirror project (mentioned last week), here’s a similar idea:
Kinemetagraph reflects the bodily movement of the visitor in real time with a matching pose from the history of Hollywood cinema. To achieve this, it correlates live motion capture data using Kinect-based “skeleton tracking” to an open-source computer vision research dataset of 20,000 Hollywood film stills with included character pose metadata for each image.
The notable thing, I think, is that what required a dedicated hardware sensor a couple of years ago can now be done plug-in-free using just a browser and webcam. Progress!
The P1000 is off the chain. “It starts a little wider than your typical smartphone camera lens,” says PopPhoto, “and can zoom far enough that you can focus on objects that are literally miles away.” Nikon says,
“We could in theory design the same spec lens for a DSLR, but it would be nearly impossible to create… [A] 3000mm lens with a maximum aperture of f/8 built for a DSLR sensor would need to have a front lens element with a diameter of about 360mm (more than 14 inches)!”
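A quick sanity check on that figure: a lens’s front element can be no smaller than its entrance pupil, which is simply focal length divided by f-number:

$$D \ge \frac{f}{N} = \frac{3000\,\text{mm}}{8} = 375\,\text{mm} \approx 14.8\,\text{in}$$

The P1000 itself sidesteps this only because its small sensor means the actual focal length is a fraction of that 3000mm full-frame-equivalent figure, so the physical glass stays (relatively) manageable.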
Fore-edge painting renders a scene on the edges of the pages of a book, and Martin Frost might be the last remaining professional fore-edge painter in the world. Here’s a peek at his vanishing craft:
Dating back centuries, the delicate art form places intricate scenes on the side of books, cheekily hidden beneath gold gilded pages. The beautiful paintings are only visible to the trained eye, but once you unlock the secret, you’ll find pure magic.
Unleash the dank emotes! My teammates George & Tyler (see previous) are back at it running machine learning in your browser, this time to get you off the couch with the playful Move Mirror:
Move Mirror takes the input from your camera feed and maps it to a database of more than 80,000 images to find the best match. It’s powered by TensorFlow.js—a library that runs machine learning models on-device, in your browser—which means the pose estimation happens directly in the browser, and your images are not being stored or sent to a server. For a deep dive into how we built this experiment, check out this Medium post.
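Just for fun, here’s a minimal sketch of the matching idea in Python, assuming poses arrive as 17 (x, y) keypoints (PoseNet’s output format); the real system matches far more cleverly (and quickly) than this brute-force cosine-similarity loop:

```python
import numpy as np

def normalize(pose):
    """Center and scale (17, 2) keypoints so matching ignores where
    the person stands in frame and how large they appear."""
    pose = pose - pose.mean(axis=0)
    return pose / (np.linalg.norm(pose) + 1e-8)

def best_match(live_pose, database):
    """Index of the database pose most similar to the live one,
    via cosine similarity on flattened, normalized keypoints."""
    q = normalize(live_pose).ravel()
    sims = [float(np.dot(q, normalize(p).ravel())) for p in database]
    return int(np.argmax(sims))

# Toy stand-in for the 80,000-image pose database.
database = [np.random.rand(17, 2) for _ in range(1000)]
print("closest image:", best_match(np.random.rand(17, 2), database))
```

The same nearest-pose lookup is essentially what Kinemetagraph (above) does against its 20,000 Hollywood stills.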
Welcome to the era of the software-defined camera.
In this era, pocketable, connected cameras can reconstruct the world in three dimensions and superhuman detail, cars are able to perceive the objects around them without the need for special sensors, and robots are able to thread the elusive needle autonomously.
Light’s highly accurate depth mapping can be used to create rich and complex environments for a wide range of applications including augmented reality.
I enjoy crapping on pointlessly voice-driven apps, but in this case Google’s Poster Maker—a demo for how Assistant can control devices—is so gleefully silly, I can dig it:
Local designers will share their favorite tools for a range of animation software, including After Effects and Cinema 4D, in short micro talks. Take your animations to the next level.
The first several guests will receive a YouTube-branded Google Cardboard unit 🙂
Back in the way back, the Adobe User Ed team got in trouble for publishing a Healing Brush tutorial that demonstrated how to remove watermarks (sorry, photographers!). Now bots promise to do the same, only radically faster & better:
“Recent deep learning work in the field has focused on training a neural network to restore images by showing example pairs of noisy and clean images,” NVIDIA writes. “The AI then learns how to make up the difference. This method differs because it only requires two input images with the noise or grain.
“Without ever being shown what a noise-free image looks like, this AI can remove artifacts, noise, grain, and automatically enhance your photos.”
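To make that concrete, here’s a toy sketch of the training idea (my own minimal illustration, not NVIDIA’s code; their model is a U-Net trained on real photo datasets). The trick is that the regression target is itself just another noisy observation of the same scene:

```python
import torch
import torch.nn as nn

# Toy denoiser standing in for the paper's U-Net.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()

for step in range(100):
    clean = torch.rand(8, 3, 64, 64)                 # stand-in for real photos
    noisy_a = clean + 0.1 * torch.randn_like(clean)  # two independent noisy
    noisy_b = clean + 0.1 * torch.randn_like(clean)  # shots of the same scene
    opt.zero_grad()
    loss = mse(model(noisy_a), noisy_b)  # no clean target anywhere
    loss.backward()
    opt.step()
```

Because the noise in the two copies is independent, the network can’t predict it and settles on the one thing the copies share: the underlying clean image.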
“Oh, is that the True Love Waits conference?” my friend once snarkily asked as we drove past GPU conference attendees milling around downtown San Jose. “Is this Virgin-con?” Their dorktastic style comes to mind seeing demos for the helmet-mounted Wunder360.
Given that my trusty, if imperfect, Theta S 360° camera has gone MIA, I’m thinking about possible replacements. Having busted on the Wunder a bit, I’ll say I’m intrigued by the mapping possibilities. Given all it promises (especially relative to, say, the $499 Rylo camera), I’d worry that it’s oversold, especially at $159—but I guess we shall see.
The device promises:
Capturing 360° video with in-camera stitching; no extra post-production software needed
Easy 3D scanning that lets anyone create in 3D
AI-powered smart tracking that locks onto your favorite view
Super-smooth stabilization; say goodbye to shaky shots
Compact, lightweight, and portable; pop the S1 in your pocket
A 100 ft waterproof case, so the S1 works with you anywhere
The fully trained PixelPlayer system, given a video as the input, splits the accompanying audio and identifies the source of sound, and then calculates the volume of each pixel in the image and “spatially localizes” it — i.e., identifies regions in the clip that generate similar sound waves.
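As a loose sketch of what “the volume of each pixel” might look like numerically (my reading of the description, not MIT’s implementation): separate the mix into a handful of audio components, weight each component by how strongly a pixel’s visual features align with it, and reduce the per-pixel audio to an RMS-energy heat map:

```python
import numpy as np

# Hypothetical shapes: K separated audio components over T time steps,
# plus a per-pixel affinity saying how strongly each pixel "plays" each one.
K, H, W, T = 4, 14, 14, 256
components = np.random.randn(K, T)   # separated audio components
affinity = np.random.rand(H, W, K)   # visual/audio alignment per pixel

# Per-pixel audio as a weighted mix of components, then per-pixel
# "volume" as RMS energy: an (H, W) map localizing sound on the frame.
pixel_audio = np.einsum('hwk,kt->hwt', affinity, components)
volume = np.sqrt((pixel_audio ** 2).mean(axis=-1))
print(volume.shape)  # (14, 14)
```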
Man, I love it when people take wacky “Y’know what’d be really crazy/cool…” ideas seriously and actually do them. To promote their new notebook sets that come with papercraft spacecraft, Field Notes actually sent one of the little guys into space (or something very close by). Bananas:
Each 3-Pack consists of three Memo Books, one each for the Mercury, Gemini, and Apollo programs. These books are full of facts and figures. They feature dramatic photographs of iconic moments from those missions on the covers…
Additionally, you’ll notice these 3-Packs come in a slightly larger package than usual. That’s because each set also contains three “Punch-Out and Assemble” Mission-Specific Crew Capsule Models, for fun and education.
You know what’s really hard? Flying steadily in one direction while smoothly sweeping the camera around to focus on a subject and maybe climbing/descending and maybe tilting the camera? Yeah, just kidding: it’s nearly impossible.
But maybe now*, through the use of Course Lock mode & with this guidance from Drone Film Guide, I can pull it off.
In a nutshell:
Pick a heading & speed
Start flying back & forth along this fixed path while varying rotation/height/tilt
Dial down the sensitivity of your yaw control
In a second installment, Stewart goes into more detail comparing Course Lock to Tap Fly:
*”Now” is relative: Yesterday my luck finally ran out as I flew the Mavic into some telephone wires. At least it’s not at the bottom of Bixby Canyon or Three-Mile Slough, where other power lines threatened to put it on previous (mis)adventures. (“God helps old folks & fools…”) The drone took a hard bounce off the pavement, necessitating a service trip to reset the gimbal (which moves but now doesn’t respond to control inputs), but overall it’s amazingly sturdy. 💪😑
My 8-year-old and 42-year-old selves just high-fived & it was glorious. Oh my God, I am so here for this. Check out the animation below & the making-of thread here.
Wow: You can fly through some amazing goals thanks to the Times graphics staff using 3D illustration package Mental Canvas to convert single images into 3D videos. Check it out (click here if the vid below doesn’t load):
Somehow I’d never heard of Mental Canvas previously. Looks rather amazing:
After busting my ass there for two solid years, I came within hours of being laid off from Adobe, only to be saved by Russell Brown. During that purgatorial period, my soon-to-be-ex boss Michael Ninness tossed me the bone of updating his Photoshop keyboard shortcuts book for PS7. I was grateful for the gig (as who knew what lay next?—certainly not getting a call to work on Photoshop!), and ever since I’ve had a particular soft spot for anyone working to make Adobe shortcuts more comprehensible. Enter Shutterstock:
[W]e created a handy printable chart for all the most common and useful shortcut key combos* in the big-three Adobe design programs (Photoshop, Illustrator, and InDesign). It’s color-coded, labeled, and grouped for maximum efficiency. We call it the Periodic Table of Adobe Keyboard Shortcuts, and we’re letting you download it here, totally free.