There are 686 light painting photographs that make up the 11-scene project. Each of these long-exposure light painting photographs is straight out of the camera, and they're arranged side by side to create motion.
My team is working to build some seriously exciting, AI-driven experiences & deliver them via the Web. We’re looking for a really savvy, energetic partner who can help us explore and ship novel Web-based interfaces that reach millions of people. If that sounds like you or someone you know, please read on.
Implement the features and user interfaces of our AI-driven product
Work closely with UX designers, Product managers, Machine Learning scientists, and ML engineers to develop dynamic and compelling UI experiences
Architect efficient and reusable front-end systems that drive complex web/mobile applications
BS/MS in Computer Science or a related technical field
Expert-level experience with HTML, CSS, and JavaScript, including concepts like asynchronous programming, closures, and types
Strong experience working with build tools such as Rush, Webpack, and npm
Strong experience with cross-browser support, browser APIs, and caching/optimization techniques that deliver faster page loads and better front-end performance
Familiar with scripting languages, such as Python
Ability to take a project from scoping requirements through actual launch
Experience communicating with users, other technical teams, and management to collect requirements and describe software product features and technical designs.
My son Henry & I were super hyped to join Russell Brown & his merry band last Monday at Nevada’s deeply weird International Car Forest of the Last Church for some fire photography featuring pyrotechnic artist Joseph Kerr. As luck would have it, I had to send Henry on ahead with little notice, pressing my DSLR on him before he left. Happily, I think he did a great job capturing the action!
Last year I enjoyed creating a 3D dronie during my desert trip with Russell Brown, flying around the Pinnacles outside of Trona:
This year I just returned (hours ago!) from another trip with Russell, this time being joined by his son Davis (who coincidentally is my team’s new UI designer!). On Monday we visited the weird & wonderful International Car Forest of the Last Church, where Davis used his drone plus Metashape to create this 3D model:
I swear to God, stuff like this makes me legitimately feel like I’m having a stroke:
And that example, curiously, seems way more technically & aesthetically sophisticated than the bulk of what I see coming from the “NFT art” world. I really enjoyed this explication of why so much of such content seems like cynical horseshit—sometimes even literally:
As I’ve noted previously, I’m (oddly?) much more bullish on Snap than on Niantic to figure out location-based augmentation of the world. That’s in part because of their very cool world lens tech, which can pair specific experiences with specific spots. It’s cool to see it rolling out more widely:
The first Lens is a new AR experience that takes users through the story of Asian-American businesswoman Lucy Yu, the owner of ‘Yu & Me Books’ in NYC, which is an independent bookshop that’s dedicated to showcasing stories from underrepresented authors.
And for one that’s more widely accessible,
Snap’s also added a new Year of the Tiger Lens, which uses Sky Segmentation technology to add an animated watercolor tiger jumping through the clouds.
If you’re passionate about making artificial intelligence work fairly & responsibly for everyone, and if you’re considering taking a new role, check out this job listing in case it’s a good fit:
The Office of Ethical Innovation (AI Ethics) drives organization-wide ethics-related activities and develops processes, tools, training, and other resources to ensure that our AI solutions consistently reflect Adobe’s AI ethics principles: accountability, responsibility, and transparency. AI is critical to our Adobe products, and we continue to see AI leveraged in more innovative ways. AI Ethics needs to be top of mind from research to product release.
What You’ll Do
Develop technical solutions to support AI ethics mandates by collaborating with our data scientists and machine learning engineers across the organization.
Translate fairness, explainability, or robustness mandates into technical guidelines and engineering requirements that will operate at scale.
Mentor teams on negative bias issues in the areas of Big Data, Natural Language Processing, Knowledge Mining, Deep Learning, Classification, GANs, and Computer Vision.
Partner closely with product teams, research, diversity and inclusion, quality assurance, legal, and other partners to define and implement responsible AI practices throughout the global organization.
Work in tandem with the AI Ethics Review Board to ensure AI-powered features are designed for inclusivity; are guided by our ethics principles; and embody outcomes that respect our customers.
Distill recommendations and findings from AI Ethics reviews to proactively improve our products.
Develop and deliver internal ethical AI training programs to enhance awareness and adoption of ethical practices.
Conduct practitioner research on the trends, advancements, and standard methodologies in AI, machine learning, and software development specific to ethics and social responsibility across industry and academia.
Some 20+ years ago (cripes…), 405: The Movie became a viral smash, in part thanks to the DIY filmmakers’ trick of compositing multiple images of the busy LA freeway in order to make it look deserted.
Now (er, 8 years ago; double cripes…) Russell Houghten has used what I imagine to be similar but more modern techniques to remove car traffic from the streets, freeing up the concrete rivers for some lovely skateboarding reveries:
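I don't know exactly what pipeline Houghten used, but the classic DIY version of this trick is a median stack: shoot many aligned frames of the same scene, then take the per-pixel median, and anything that moves (cars, pedestrians, skaters) simply vanishes because it never occupies the same pixel in a majority of frames. A minimal NumPy sketch of that idea, with a toy "street" instead of real footage:

```python
import numpy as np

def remove_transients(frames: np.ndarray) -> np.ndarray:
    """Median-stack aligned frames of shape (N, H, W).

    Moving objects differ from frame to frame, so the per-pixel
    median converges on the static background, as long as each
    pixel is unobstructed in most of the N frames.
    """
    return np.median(frames, axis=0).astype(frames.dtype)

# Toy demo: five frames of a flat gray "street" (value 100),
# crossed by a bright "car" (value 255) in a different row each frame.
street = np.full((5, 4, 4), 100, dtype=np.uint8)
for i in range(5):
    street[i, i % 4, :] = 255  # the moving object

clean = remove_transients(street)  # the car is gone: all pixels are 100
```

Real footage would of course need the frames registered first (a locked-down tripod, or software alignment), and enough frames that no spot stays blocked more than half the time.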
I’m headed out to Death Valley on Friday for some quick photographic adventures with Russell Brown & friends, and I’m really excited to try photographing with burning steel wool for the first time. I’m inspired by this tutorial from Insta360 to try shooting with my little 360º cam:
“Just don’t be horrendously disappointed if it doesn’t turn out quite like this,” advises Henry, my 12yo aspiring assistant. Fair enough, dude—but let’s take it for a spin!
If you’ve ever shot this way & have any suggestions for us, please add ’em in the comments. TIA!
“I’m like, ‘Bro, how much furniture do you think I buy??'”
I forget who said this while I was working on AR at Google, but it’s always made me laugh, because nearly every demo inevitably gets into the territory of, “Don’t you wish you could see whether this sofa fits in your space?”
Still, though, it’s a useful capability—especially if one can offer a large enough corpus of 3D models (something we found challenging, at least a few years back). Now, per the Verge:
Pinterest is adding a “Try On for Home Decor” feature to its app, letting you see furniture from stores like Crate & Barrel, CB2, Walmart, West Elm, and Wayfair in your house… According to the company’s announcement post, you’ll be able to use its Lens camera to try out over 80,000 pieces of furniture from “shoppable Pins.”
Hmm—I always want to believe in tools like this, but I remain skeptical. Back at Google I played with Blocks, which promised to make 3D creation fun, but which in my experience combined the inherent complexity of that art with the imprecision and arm fatigue of waving controllers in space. But who knows—maybe Shapes is different?
I was really pleased to see Google showcase the new Magic Eraser feature in Pixel 6 marketing. Here’s a peek at how it works:
I had to chuckle & remember how, just after he’d been instrumental in shipping Content-Aware Fill in Photoshop in 2010, my teammate Iván Cavero Belaunde created a tablet version he dubbed “Trotsky,” in mock honor of the Soviet practice of “disappearing” people from photos. I still wish we’d gotten to ship it—especially with that name!
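Content-Aware Fill and Magic Eraser are built on sophisticated patch-matching and ML-based synthesis, but the core "disappearing" idea can be illustrated (very crudely) by simple diffusion inpainting: mask the unwanted object, then repeatedly replace each masked pixel with the average of its neighbors so the surroundings bleed inward. A toy NumPy sketch of that idea, not a stand-in for how either product actually works:

```python
import numpy as np

def diffuse_fill(image: np.ndarray, mask: np.ndarray, iters: int = 200) -> np.ndarray:
    """Fill the True region of `mask` by diffusing in surrounding pixels.

    Each iteration sets masked pixels to the mean of their four
    neighbors (a Jacobi step for Laplace's equation), so color flows
    inward from the hole's border until it blends with the background.
    """
    out = image.astype(float).copy()
    for _ in range(iters):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = avg[mask]  # only the hole is ever rewritten
    return out

# Toy demo: a flat gray image with a bright square to "disappear."
img = np.full((16, 16), 80.0)
img[6:10, 6:10] = 255.0
mask = np.zeros((16, 16), dtype=bool)
mask[6:10, 6:10] = True

filled = diffuse_fill(img, mask)  # the square fades into the 80-gray field
```

On real photos this only produces a smooth blur; the magic in the shipping tools is synthesizing plausible *texture* to fill the hole.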
Update: Somehow Iván still has the icon after all these years: