I’m incredibly excited to say that my team has just opened a really rare role to design AI-first experiences. From the job listing:
Together, we are working to inspire and empower the next generation of creatives. You will play an integral part, designing and prototyping exciting new product experiences that take full advantage of the latest AI technology from Adobe research. We’ll work iteratively to design, prototype, and test novel creative experiences, develop a deep understanding of user needs and craft new AI-first creative tools that empower users in entirely new and unimagined ways.
Your challenge is to help us pioneer AI-first creation experiences by creating novel experiences that are intuitive, empowering, and first of their kind.
By necessity that’s a little vague, but trust me, this stuff is wild (check out some of what I’ve been posting in the AI/ML category here), and I need a badass fellow explorer. I really want a partner who’s excited to have a full seat at the table alongside product & eng (i.e. you’re in the opposite of a service relationship where we just chuck things over the wall and say “make this pretty!”), and who’s excited to rapidly visualize a lot of ideas that we’ll test together.
We are at a fascinating inflection point, where computers learn to see more like people & can thus deliver new expressive superpowers. There will be many dead ends & many challenging ethical questions that need your careful consideration—but as Larry Page might say, it’s all “uncomfortably exciting.” 🔥
If you might be the partner we need, please get in touch via the form above, and feel free to share this opportunity with anyone who might be a great fit. Thanks!
This research inspired us to use Jacquard technology to create a soft, interactive patch or sleeve that allows people to access digital, health and security services with simple gestures. This woven technology can be worn or positioned on a variety of surfaces and locations, adjusting to the needs of each individual.
We teamed up with Garrison Redd, a Para powerlifter and advocate in the disability community, to test this new idea.
Chrome now prioritizes your active tabs vs. everything that’s open—reducing CPU usage by up to 5x and extending battery life by up to 1.25 hours (based on our internal benchmarks).
You can pin tabs (for those go-to pages), send tabs to your other devices and even group tabs in Chrome. This month we’re adding tab search to the toolbox.
You’ll now be able to see a list of your open tabs—regardless of the window they’re in—then quickly type to find the one you need. It’s search … for your tabs! The feature is coming first to Chromebooks, then to other desktop platforms soon.
Search has rolled out on Chrome OS & is due to come to other platforms soon.
Hovering the camera over the steering wheel will show customers how to use the steering wheel controls or paddle shifters, while pointing at the dashboard will show infotainment functionality.
The app was developed in just three months to roll out on the 2021 Ram TRX. The wild truck will be the first vehicle to use the Know & Go app, and it will be available on other FCA vehicles down the line.
“Diorama will democratize the creation of special effects in the same way the smartphone democratized photography. It will allow anyone to create beautiful visual effects the likes of which have previously only been accessible to Hollywood studios,” said Nat Martin, founder of Litho, in a statement.
When combined with the Litho controller, users can animate objects simply by dragging them, fine-tuning the path by grabbing specific points. Mood lighting can be added thanks to a selection of filters, and the app supports body tracking so creators can interact with a scene.
Back in the day (like, when Obama was brand new in office), I was intrigued by Microsoft’s dual-screen tablet Courier concept. Check out this preview from 2009:
The device never saw production, and some of the brains behind it went on to launch the lovely Paper drawing app for iPad. Now, however, the company is introducing the Surface Duo, and I think it looks slick:
Fun detail I’d never have guessed in 2009: it runs Android, not Windows!
The price is high ($1400 and up for something that’s not really a phone or a laptop—though something that could replace both some of the time?), and people are expressing skepticism, but we’ll see how things go. Congrats to the folks who persevered with that interesting original concept.
I’ve long joked-not-joked that I want better parental controls on devices, not so that I can control my kids but so that I can help my parents. How great would it be to be able to configure something like this, then push it to the devices of those who need it (parents, kids, etc.)?
The Looking Glass is powered by our proprietary 45-element light field technology, generating 45 distinct and simultaneous perspectives of three-dimensional content of any sort.
This means multiple people around a Looking Glass are shown different perspectives of that three-dimensional content—whether that’s a 3D animation, DICOM medical imaging data, or a Unity project—in super-stereoscopic 3D, in the real world without any VR or AR headgear.
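For a rough sense of how those 45 perspectives get divvied up, here's a toy sketch (not Looking Glass's actual pipeline; the ~40° viewing cone and the clamping behavior are my assumptions) that maps a viewer's horizontal angle to one of the discrete views:

```python
def view_index(viewer_angle_deg, num_views=45, cone_deg=40.0):
    """Map a viewer's horizontal angle (relative to the screen normal)
    to one of `num_views` discrete perspectives.

    Angles outside the viewing cone clamp to the edge views, so someone
    standing far to the side still sees the outermost perspective.
    """
    half = cone_deg / 2.0
    clamped = max(-half, min(half, viewer_angle_deg))
    t = (clamped + half) / cone_deg          # normalize to 0..1 across the cone
    return min(num_views - 1, int(t * num_views))
```

Straight-on viewing lands on the middle view, while two people at opposite edges of the cone get the first and last perspectives—which is the whole "different people see different views" trick.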
[A] finger could be used to write legibly in the air without a touch surface, as well as providing input taps, flick gestures, and potentially pinches that could control a screened device from afar. Thanks to the magnetic sensing implementation, researchers suggest that even a visually obscured finger could be used to send text messages, interact with device UIs, and play games. Moreover, AuraRing has been designed to work on multiple finger and hand sizes.
Mark Coleran is a mograph O.G. whose “Fantasy User Interface” (“FUI”) work for movies I wrote about a lot back at Adobe. It was fun listening to him & other designers share a peek into this unique genre of visual storytelling via Adobe’s great Wireframe podcast. I think you’ll enjoy it:
Researchers from MIT Media Lab and Adobe Research recently introduced a real-time interactive augmented video system that enables presenters to use their bodies as storytelling tools by linking gestures to illustrative virtual graphic elements. […]
The speaker, positioned in front of an augmented reality mirror monitor, uses gestures to produce and manipulate the pre-programmed graphical elements.
Will presenters go for it? Will students find it valuable? I have no idea—but props to anyone willing to push some boundaries.
To assign a reminder, ask your Assistant, “Hey Google, remind Greg to take out the trash at 8pm.” Greg will get a notification on his Assistant-enabled Smart Display, speaker, and phone when the reminder is created, so that it’s on his radar. Greg will get notified again at the exact time you asked your Assistant to remind him. You can even quickly see which reminders you’ve assigned to Greg, simply by saying, “Hey Google, what are my reminders for Greg?”
My teammates have been hard at work to enable not only unlocking your phone using your face, but also using hand gestures to “skip songs, snooze alarms, silence phone calls,” and more. Check out the blog post and the quick demo below:
Check out this funky little donkus, “a small finger-worn controller that connects to your smartphone or headset” to help you point at & control items in the world. It’s more easily demoed than explained:
What does the computer interface of the future look like? One bet from Google is that it will involve invisible interfaces you can tweak and twiddle in mid-air. This is what the company is exploring via Project Soli, an experimental hardware program which uses miniature radar to detect movement, and which recently won approval from the FCC for further study.
Here’s a pretty darn clever idea for navigating among apps by treating your phone as a magic window into physical space.
You use the phone’s spatial awareness to ‘pin’ applications in a certain point in space, much like placing your notebook in one corner of your desk, and your calendar at another… You can create a literal landscape of apps that you can switch between by simply switching the location of your phone.
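Just to make the idea concrete, here's a toy sketch (purely illustrative; the article doesn't describe an implementation, and the pin positions are invented) of switching apps based on whichever "pin" the phone is currently nearest:

```python
import math

# Hypothetical layout: each app is pinned at a 3D point (meters, in
# whatever tracked frame the phone's spatial awareness provides).
PINS = {
    "notebook": (0.0, 0.0, 0.0),   # left corner of the desk
    "calendar": (0.6, 0.0, 0.0),   # right corner of the desk
    "music":    (0.3, 0.4, 0.0),   # back of the desk
}

def active_app(phone_pos, pins=PINS, max_dist=0.25):
    """Return the app pinned nearest the phone, or None if the phone
    isn't within `max_dist` meters of any pin."""
    best, best_d = None, max_dist
    for app, p in pins.items():
        d = math.dist(phone_pos, p)
        if d < best_d:
            best, best_d = app, d
    return best
```

Moving the phone over the "notebook" corner would select that app; hold it away from every pin and nothing activates—much like glancing at an empty stretch of desk.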
One’s differing physical abilities shouldn’t stand in the way of drawing & making music. Body-tracking tech from my teammates George & Tyler (see previous) is just one of the new Web-based experiments in Creatability. Check it out:
Creatability is a set of experiments made in collaboration with creators and allies in the accessibility community. They explore how creative tools – drawing, music, and more – can be made more accessible using web and AI technology. They’re just a start. We’re sharing open-source code and tutorials for others to make their own projects.
Robbie has duchenne muscular dystrophy, which has left him able to control only his eyes, head and right thumb joint. […] Bill Weis, a retired tech worker […] set up Robbie’s bed to be controlled by voice activation. While working on the bed, Bill had an epiphany: if he can control the bed this way, why not everything else in Robbie’s bedroom universe?
“Material Theming” effectively fixes a core gripe with the original “Material Design”: that virtually every Android app looks the “same,” as if made by Google, which isn’t ideal for brands.
The tool is currently available for Sketch; you can use it by downloading the “Material” plugin in the app. Google aims to expand the system regularly and will roll out new options such as animations, depth controls, and textures next.
HoloActive Touch appears to float in air, and also provides actual felt, tactile feedback in response to interactions.
As for the tech used to make the interface feel somewhat physical, even though you’re just poking around in mid-air, we’ve heard it might be sourced from Ultrahaptics, a company whose whole mission is to make it possible to feel things including “invisible buttons and dials” when you want them to be tangible, and then not when you don’t.
Now they’re back, showing a slicker but shallower (?) version of the same idea:
Well, we’ll see. Hopefully there’s a lot more to the Adobe tech. Meanwhile, I’m reminded of various VR photo-related demos. After donning a mask & shuffling around a room waving wands in the air like a goof, you realize, “Oh… so I just did the equivalent of zooming in & showing the caption?!”
Who f’ing cares?
You know what would be actually worth a damn? Let me say, “Okay, take all my shots where Henry is making the ‘Henry Face,’ then make an animated face collage made up of those faces—and while you’re at it, P-shop him into a bunch of funny scenes.” Don’t give me a novel but cumbersome rehash, gimme some GD superpowers already.
But hey, they’re making a new Blade Runner, so maybe now Ryan Gosling will edit his pics by voice, and they’ll bring back talking cameras, and in the words of Stephen Colbert, “It’s funny because nothing matters.”
Seriously (unless, of course, the UI demo is just some elaborate trolling). I can’t wait for social media to let you apply a “Facepalm” reaction by literally jamming your phone/palm against your face. Check out the demo & read on for details:
(Of course, in the current political climate I can’t help but think, “Great, I’m glad this is the critically important shit we spend our biggest brains on.”)
Once you’re gone you can never come back… — Neil Young
Luke Wroblewski (designer, writer, & coincidentally my boss) shares a bunch of interesting details on how best to ask users for permission to access location, etc. (e.g. “the double dialog,” a decoy prompt that gauges whether you’d say no; if so, the app can simply ask again later).
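To make the pattern concrete, here's a toy sketch of the flow (the function names and state handling are my own, not from any SDK or from Luke's writeup):

```python
def ask_for_location(show_preprompt, show_os_dialog, state):
    """Sketch of the "double dialog" permission pattern.

    The real OS permission dialog can typically be shown only once,
    so the app first shows its own decoy prompt. A "no" to the decoy
    costs nothing: the one-shot system dialog stays unspent, and the
    app can ask again at a better moment.
    """
    if not show_preprompt():           # in-app decoy dialog
        state["retry_later"] = True    # user declined; try again later
        return False
    return show_os_dialog()            # the real, one-shot system dialog
```

Only users who've already said yes to the decoy ever see the system dialog, which is exactly why the pattern works (and why some find it a bit sneaky).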
Territory Studio nailed a tricky middle ground (futuristic but not fanciful) in crafting some great-looking interfaces for The Martian. Take a look:
Working closely with NASA, Territory developed a series of deft and elegant concepts that combine factual integrity and filmic narrative, yet are forward looking and pushing NASA’s current UI conventions as much as possible.
Territory’s plot-based graphics include identities and visual languages for each set: images, text, code, engineering schematics, and 3D visualisations based on authentic satellite images of Martian terrain, weather, and mission equipment, served across consoles, navigation and communication systems, laptops, mobiles, tablets, and arm screens throughout.
In all Territory delivered around 400 screens for on-set playback, most of them featuring interactive elements. With 85 screens on the NASA Mission Control set alone, a number of which were 6mx18m wall screens, there are many moments in which the graphics become a dynamic bridge between Earth and Mars, narrative and action, audience and characters.
Super fun, pre-animated, sometimes looping, customizable Fake User Interface assets, as editable After Effects comps. Just drag and drop to quickly create and customize FUI layouts to suit your projects.
It could be cool, but I find myself getting old & jaded. The Leap Motion sensor has yet to take off, and I’m reminded of Logitech’s NuLOOQ Navigator. It was announced some 9 years ago, drove Adobe tools in similar ways, and failed to find traction in the market (though it’s evidently been superseded by the SpacePilot Pro).
Having an excessive interest in keyboard shortcuts (I once wrote an edition of a book dedicated to this subject), I’m delighted to see some welcome tweaks arriving in Photoshop CC. According to Julieanne Kost’s blog:
Cmd-comma hides/shows the currently selected layer(s)
Cmd-opt-comma shows all layers
Cmd-slash locks/unlocks the currently selected layer(s)
Cmd-opt-slash unlocks all layers
(On Windows substitute Ctrl-Alt for Cmd-Opt) [Via Jeff Tranberry]
If “Double knuckle knock” becomes more than, I dunno, presumably some gross phrase you’d find on Urban Dictionary, you may thank the folks at Qeexo:
FingerSense is an enhancement to touch interaction that allows conventional screens to know how the finger is being used for input: fingertip, knuckle or nail. Further, our system can add support for a passive stylus with an eraser. The technology is lightweight, low-latency and cost effective.
Tethr bills itself as “The last UI kit you’ll ever need” and “The Most Beautiful iOS Design Kit Ever Made.” I’ll leave that judgement to you, but at a glance it looks like some nicely assembled PSD templates.
You don’t actually need Photoshop to leverage these templates, either: Adobe’s Web-based Project Parfait can extract content “as 8-bit PNG, 32-bit PNG, JPG, and SVG images.”
Hmm—I’m not sold (at all) on the discoverability of this thing, but I remain deeply eager to see someone break open the staid, hoary world of in-car electronics. (The hyped Sync system in our new Fusion is capable but byzantine & laggy. What’s waiting a second+ after button pushes between friends—besides roughly 100 feet traveled at speed?) What do you think?
Transylvanian non(?)-vampire Sorin Neica has created the “Keyboard-S,” an enormous (yet thin) keyboard designed to drive Photoshop & potentially other apps. It’s sort of a Configurator panel that’s sprung right off your screen:
I have a hard time imagining it taking off, and funding on Kickstarter is pretty anemic to date, but I found the idea interesting enough to share. [Via Gary Greenwald]
According to the team, the new Stand In will let you:
Share your prototypes with teammates and clients. Let them experience your designs on their devices instead of scrolling through PDFs on their computers.
Design and use your prototype in real time. As you make changes in Photoshop, Stand In sends the changes to the fully functional prototype.
Move past boring static screens. Add buttons with press states, content that scrolls, modals, and more!
Bring your prototype to life with screen transitions and animation. Stop telling people how the app is supposed to work. Start showing them.
The tool costs $25/mo. & requires a Mac running Photoshop CC.
“Much more than image extraction,” writes Photoshop’s Tim Riot, “Stand In takes positioning, styling, state, even motion data, from PSDs and creates prototypes that feel like real apps which you can view on your iPhone. This capability, to fluidly create in Photoshop and seamlessly output designs to any context, is at the heart of the Generator technology.”
People loved the photo backup/sharing startup Everpix, but it keeled over after netting just ~6,000 paying customers. (That’s hardly surprising in a world where backup & sharing come free with every phone.) It started to popularize a neat feature called Flashback, one that showed photos from your archive taken exactly one year ago.
Now I’ve found Timehop, a free iOS app that finds the images you shared across various social networks, then gives you snapshots from one, two, and more years ago. The daily push notification it sends provides a little treat I’ve come to anticipate.
What sets the app apart, though, is the delight its creators take in otherwise-mundane UI details. The spinning loading indicator is a Back To The Future-style flux capacitor:
(In the app itself it animates.) They’ve also enjoyed making their mascot Abe paw at the pull-to-refresh indicator, seen here captured by Beautiful Pixels:
Well played, guys. Can’t wait to see what you cook up next.
“People don’t come to us because they want 1-inch drills,” the CEO of Black & Decker is said to have remarked. “They come to us because they want 1-inch holes.”
The beautifully executed app Tastemade (App Store) represents an interesting evolution in creative software. Instead of offering an open-ended toolset for doing any number of projects, it aims to do just one thing well—namely, produce short, highly watchable person-on-the-street reviews of restaurants. The entire interface is built to walk you through making & sharing exactly one kind of content. Through constraint + automation, it tends to quickly produce a very nice “hole” (example).
The app is full of nice design touches. For example:
Based on its knowledge of your location & Foursquare data, the app can guess which restaurant you’re visiting, auto-populate the title field, then choose an appropriate font/music combo (which you can then change).
You’re prompted to capture a number of shots, and a colored progress indicator helps ensure you shoot enough but not too much.
When you go to choose a color look, your existing clips are played back at 2x speed, making it easier to see the impact of the filter on more footage.
One of the clips you shoot of the venue is placed behind the title & blurred.
Now, is this particular problem worth solving (i.e. do a lot of people want to record, share, and watch restaurant reviews)? I have no idea. (I’m not allowed out of the house; thanks, kids.) I think, however, that the radically reduced barriers to building & distributing software will keep reshaping the creative-tool landscape, producing more highly focused apps that nicely address one specific need.