At nearly twice the price while lacking features like Bullet Time, the Insta360 One RS 1″ had better produce way better photos and videos than what comes out of my trusty One X2. I therefore really appreciate this detailed side-by-side comparison. Having used both together, I don’t see a dramatic difference, but this vid certainly makes a good case that the gains are appreciable.
As a key member and thought-leader on this team, you’ll be an integral part of exploring and influencing the next generation of Adobe’s creative tools. Together, we are forging a new class of experience standards for desktop, mobile, and web products for years to come.
You will (among other things):
- Seek/design the “simple” experiences and interactions that influence our growing portfolio of creative tools. Empower users to delight themselves.
- Partner closely with fellow senior designers, product managers, senior engineers, and other leaders across different teams to bring new products to life.
What you’ll bring to the team
- A minimum of 5 years of industry experience in product design with a proven track record of success
- Experience (and a love of!) solving complex design and technology problems using systems thinking
- Excellent communication skills, with the ability to clearly articulate a multi-level problem space and strategy behind design decisions
- Creative and analytical skills to advocate for and support research, and to synthesize and communicate insights that inform design opportunities and product strategy
- Passion for understanding how creative people do what they do and how technology plays a role in the creative process
- Experience establishing user experience patterns across mobile, web, and desktop products within connected platforms
[W]e’re bringing photorealistic aerial views of nearly 100 of the world’s most popular landmarks in cities like Barcelona, London, New York, San Francisco and Tokyo right to Google Maps. This is the first step toward launching immersive view — an experience that pairs AI with billions of high-definition Street View, satellite and aerial images.
Say you’re planning a trip to New York. With this update, you can get a sense for what the Empire State Building is like up close so you can decide whether or not you want to add it to your trip itinerary. To see aerial views wherever they’re available, search for a landmark in Google Maps and head to the Photos section.
GANs (generative adversarial networks), like what underpins Smart Portrait in Photoshop, promise all kinds of fine-grained image synthesis and editing. Check out new advances around one’s ‘do:
[Via Davis Brown]
AI animation tech, which in this case leverages the motion of a face in a video to animate a different face in a still image, keeps getting better & better. Check out these results from Samsung Labs:
My video teammates (including the one I married) are attempting some groundbreaking, audacious stuff, and this newly open gig is a great chance to dive in with them:
This role will lead projects across various workflows, from editing and audio to color, graphics, and motion. You will bring your future-looking ideas to life with your designs, prototypes, and storytelling toolkit. You are a strong advocate for the customer because you can relate to their needs and understand the power of design and story to transform.
You will use your experience in post-production and experience design to paint the future vision of the products, but you’ll also love getting down into the details of shipping new builds and setting the example for high-quality work.
I am, evidently, the kind of guy at parties who’ll whip out a typographical comedy clip—but hey, this one is worth it! Enjoy, fellow nerds:
The replies led me to discover Emdash.fan, an entire site devoted to providing you, the gentle visitor, with exactly one (1) delicious em dash for your clipboard. Insane & thus amazing. 😌
On a related note:
“Oh my God… Who is this Dave Werner guy and what kind of government lab built him?” So I raved back in 2006 (!). Dave was only a student back then, but his potential was obvious, and I’m so happy he reached out in 2012 and became the designer on my then-team. Now 10 years later, he reflects on his journey in this characteristically charming, inventive little video:
When it rains, it pours: Text2LIVE promises the ability to use descriptions to modify parts of photos:
[O]ur goal is to edit the appearance of existing objects (e.g., object’s texture) or augment the scene with new visual effects (e.g., smoke, fire) in a semantically meaningful manner.
It also works on video:
The team behind Make-A-Scene writes,
We found that the image generated from both text and sketch was almost always (99.54 percent of the time) rated as better aligned with the original sketch. It was often (66.3 percent of the time) more aligned with the text prompt too. This demonstrates that Make-A-Scene generations are indeed faithful to a person’s vision communicated via the sketch.
Nicely done; can’t wait to see more experiences like this.