There are 686 light painting photographs that make up the 11-scene project. Each of these long-exposure light painting photographs is straight out of the camera, and they're arranged side by side to create motion.
My team is working to build some seriously exciting, AI-driven experiences & deliver them via the Web. We’re looking for a really savvy, energetic partner who can help us explore and ship novel Web-based interfaces that reach millions of people. If that sounds like you or someone you know, please read on.
Implement the features and user interfaces of our AI-driven product
Work closely with UX designers, product managers, machine learning scientists, and ML engineers to develop dynamic and compelling UI experiences
Architect efficient and reusable front-end systems that drive complex web/mobile applications
BS/MS in Computer Science or a related technical field
Expert-level experience with HTML, CSS, and JavaScript, including concepts like asynchronous programming, closures, and types
Strong experience working with build tools such as Rush, Webpack, and npm
Strong experience with cross-browser support, caching and optimization techniques for faster page load times, browser APIs, and front-end performance tuning
Familiarity with scripting languages such as Python
Ability to take a project from scoping requirements through launch
Experience communicating with users, other technical teams, and management to collect requirements and describe software product features and technical designs
My son Henry & I were super hyped to join Russell Brown & his merry band last Monday at Nevada’s deeply weird International Car Forest of the Last Church for some fire photography featuring pyrotechnic artist Joseph Kerr. As luck would have it, I had to send Henry on ahead with little notice, pressing my DSLR on him before he left. Happily, I think he did a great job capturing the action!
Last year I enjoyed creating a 3D dronie during my desert trip with Russell Brown, flying around the Pinnacles outside of Trona:
This year I just returned (hours ago!) from another trip with Russell, this time being joined by his son Davis (who coincidentally is my team’s new UI designer!). On Monday we visited the weird & wonderful International Car Forest of the Last Church, where Davis used his drone plus Metashape to create this 3D model:
I swear to God, stuff like this makes me legitimately feel like I’m having a stroke:
And that example, curiously, seems way more technically & aesthetically sophisticated than the bulk of what I see coming from the “NFT art” world. I really enjoyed this explication of why so much of such content seems like cynical horseshit—sometimes even literally:
As I’ve noted previously, I’m (oddly?) much more bullish on Snap than on Niantic to figure out location-based augmentation of the world. That’s in part because of their very cool world lens tech, which can pair specific experiences with specific spots. It’s cool to see it rolling out more widely:
The first Lens is a new AR experience that takes users through the story of Asian-American businesswoman Lucy Yu, owner of ‘Yu & Me Books,’ an independent NYC bookshop dedicated to showcasing stories from underrepresented authors.
And for one that’s more widely accessible, Snap has also added a new Year of the Tiger Lens, which uses Sky Segmentation technology to add an animated watercolor tiger jumping through the clouds.