Category Archives: User Interface
2023 Design Trends
"I strongly believe that animation skills are going to be the next big thing in UI design," says designer Michal Malewicz. Check out his full set of predictions for the year ahead:
Stable Diffusion + ArtBreeder = creative composition
We are teetering on the cusp of a Cambrian explosion in UI creativity, with hundreds of developers competing to put amazing controls atop a phalanx of ever-improving generative models. These next couple of months & years are gonna be wiiiiiiild.
TouchType
This tool looks rather nifty, though I haven’t yet had a chance to try it via iOS. Here’s a quick demo:
Are you a badass Web developer? Join us!
My team is working to build some seriously exciting, AI-driven experiences & deliver them via the Web. We’re looking for a really savvy, energetic partner who can help us explore and ship novel Web-based interfaces that reach millions of people. If that sounds like you or someone you know, please read on.
———-
Key Responsibilities:
- Implement the features and user interfaces of our AI-driven product
- Work closely with UX designers, Product managers, Machine Learning scientists, and ML engineers to develop dynamic and compelling UI experiences.
- Architect efficient and reusable front-end systems that drive complex web/mobile applications
Must Have:
- BS/MS in Computer Science or a related technical field
- Expert-level experience with JavaScript/TypeScript and frameworks such as Web Components, LitElement, ReactJS, Redux, RxJs, Materialize, jQuery, NodeJS
- Expert-level experience with HTML and CSS, including concepts like asynchronous programming, closures, and types
- Strong experience working with build tools such as Rush, Webpack, and npm
- Strong experience with cross-browser support, caching and optimization techniques for faster page load times, browser APIs, and front-end performance tuning
- Familiarity with scripting languages such as Python
- Ability to take a project from scoping requirements through launch
- Experience communicating with users, other technical teams, and management to collect requirements and describe software product features and technical designs

Folks with speech impairments get an assist from Google
Man, I love stuff like Project Relate, and it’s fun to see some of my old teammates featured here. This is the stuff that’s really worth a damn, IMHO.
Come help me design The Future!
I’m incredibly excited to say that my team has just opened a really rare role to design AI-first experiences. From the job listing:
Together, we are working to inspire and empower the next generation of creatives. You will play an integral part, designing and prototyping exciting new product experiences that take full advantage of the latest AI technology from Adobe research. We’ll work iteratively to design, prototype, and test novel creative experiences, develop a deep understanding of user needs and craft new AI-first creative tools that empower users in entirely new and unimagined ways.
Your challenge is to help us pioneer AI-first creation experiences by creating novel experiences that are intuitive, empowering, and first of their kind.
By necessity that’s a little vague, but trust me, this stuff is wild (check out some of what I’ve been posting in the AI/ML category here), and I need a badass fellow explorer. I really want a partner who’s excited to have a full seat at the table alongside product & eng (i.e. you’re in the opposite of a service relationship where we just chuck things over the wall and say “make this pretty!”), and who’s excited to rapidly visualize a lot of ideas that we’ll test together.
We are at a fascinating inflection point, where computers learn to see more like people & can thus deliver new expressive superpowers. There will be many dead ends & many challenging ethical questions that need your careful consideration – but as Larry Page might say, it's all "uncomfortably exciting." 🔥
If you might be the partner we need, please get in touch via the form above, and feel free to share this opportunity with anyone who might be a great fit. Thanks!
Google taps (heh) Project Jacquard to improve accessibility
It’s always cool to see people using tech to help make the world more accessible to everyone:
This research inspired us to use Jacquard technology to create a soft, interactive patch or sleeve that allows people to access digital, health and security services with simple gestures. This woven technology can be worn or positioned on a variety of surfaces and locations, adjusting to the needs of each individual.
We teamed up with Garrison Redd, a Para powerlifter and advocate in the disability community, to test this new idea.
UI Faces enables easy avatar insertion
As I obviously have synthetic faces on my mind, here’s a rather cool tool for finding diverse images of people and adding them to design layouts:
UI Faces aggregates thousands of avatars which you can carefully filter to create your perfect personas or just generate random avatars.
Each avatar is tagged with age, gender, emotion and hair color using Microsoft's Face API, providing easier filtering and sorting.
Here’s how it integrates into Adobe XD:
Introducing “Gmail Motion”
Heh – this joke UI reminds me of South Park's Segway parody, "The Entity," but would probably be a bit less invasive to use. 😌
“Monster Mash” promises super fast 3D character rigging & animation
David Salesin led Adobe Research for the better part of a decade, and now that he’s at Google, he & others have been collaborating with university researchers to enable fast, fun character animation:
Chrome is reducing memory usage, adding tab search, and more
As always I'm low-key embarrassed to find this stuff exciting, but ¯\_(ツ)_/¯. The team writes,
Chrome now prioritizes your active tabs vs. everything that's open – reducing CPU usage by up to 5x and extending battery life by up to 1.25 hours (based on our internal benchmarks).
Plus:
You can pin tabs (for those go-to pages), send tabs to your other devices and even group tabs in Chrome. This month we’re adding tab search to the toolbox.
You'll now be able to see a list of your open tabs – regardless of the window they're in – then quickly type to find the one you need. It's search … for your tabs! The feature is coming first to Chromebooks, then to other desktop platforms soon.
Search has rolled out on Chrome OS & is due to come to other platforms soon.

Oh, and the "omnibox" (URL/search/dessert topping/floor wax) is learning to act on things you type in. Initial actions:
- Clear Browsing Data – type "delete history", "clear cache" or "wipe cookies"
- Manage Payment Methods – type "edit credit card" or "update card info"
- Open Incognito Window – type "launch incognito mode" or "incognito"
- Manage Passwords – type "edit passwords" or "update credentials"
- Update Chrome – type "update browser" or "update google chrome"
- Translate Page – type "translate this" or "translate this page"
AR comes to driving nav in the new Escalade
Downside: You’re sticking it to the earth to the tune of 12mpg.
Upside: U-turn arrows are neatly curved!
Swapping out the traditional display area & presenting a camera feed – which can evidently feature night vision as well – is a clever alternative to projecting a washed-out HUD onto the windshield.
AR soups up the humble car manual
The new Ram comes with an augmented reality feature for exploring one’s 700hp whip:
Hovering the camera over the steering wheel will show customers how to use the steering wheel controls or paddle shifters, while pointing at the dashboard will show infotainment functionality.
The app was developed in just three months to roll out on the 2021 Ram TRX. The wild truck will be the first vehicle to use the Know & Go app, and it will be available on other FCA vehicles down the line.

Give FX the finger in AR
The free new app Diorama pairs with the $99 finger-worn Litho device to let you create AR movies directly inside your phone, using a selection of props & tapping into the Google Poly library:
VR Focus writes,
"Diorama will democratize the creation of special effects in the same way the smartphone democratized photography. It will allow anyone to create beautiful visual effects the likes of which have previously only been accessible to Hollywood studios," said Nat Martin, founder at Litho, in a statement.
When combined with the Litho controller, users can animate objects simply by dragging them, fine-tuning the path by grabbing specific points. Mood lighting can be added thanks to a selection of filters, plus the app supports body tracking so creators can interact with a scene.

Chrome gets tab groups
“Can I get that icon in cornflower blue…?”
Being a middle-aged man getting excited about tab management in a Web browser makes me a little queasy – but hey, I live in this stuff all day, so 🎉.
You can now group tabs in Chrome:

You can collapse the tab groups, and you can make the titles small:
My pro tip is that you can use an emoji as a group name, such as ❤️ for inspiration or 📖 for articles to read.

Hey, find joy where you can, amirite? 😌
After 10+ years of teasing, Microsoft’s dual-screen device arrives
Back in the day (like, when Obama was brand new in office), I was intrigued by Microsoft’s dual-screen tablet Courier concept. Check out this preview from 2009:
The device never saw production, and some of the brains behind it went on to launch the lovely Paper drawing app for iPad. Now, however, the company is introducing the Surface Duo, and I think it looks slick:
Fun detail I’d never have guessed in 2009: it runs Android, not Windows!
The price is high ($1,400 and up for something that's not really a phone or a laptop – though something that could replace both some of the time?), and people are expressing skepticism, but we'll see how things go. Congrats to the folks who persevered with that interesting original concept.
Action Blocks make tasks more accessible for those with cognitive impairments
I’ve long joked-not-joked that I want better parental controls on devices, not so that I can control my kids but so that I can help my parents. How great would it be to be able to configure something like this, then push it to the devices of those who need it (parents, kids, etc.)?
Check out the gesture-sensing holographic Looking Glass
This little dude looks nifty as heck:
The Looking Glass is powered by our proprietary 45-element light field technology, generating 45 distinct and simultaneous perspectives of three-dimensional content of any sort.
This means multiple people around a Looking Glass are shown different perspectives of that three-dimensional content – whether that's a 3D animation, DICOM medical imaging data, or a Unity project – in super-stereoscopic 3D, in the real world without any VR or AR headgear.
[Vimeo]
Control a Mac via head movements
Crafty Rube Goldberg-ing for social good (making tech more accessible):
Control your Mac using head movements. Rotate your head to move the cursor and make facial expressions to click, drag, and scroll. Powered by your iPhone's TrueDepth camera.
[YouTube]
AuraRing, a trippy ring + wristband combo gesture system
Hmm – AR glasses + smart watch (or FitBit) + ring? 🧐
VentureBeat writes,
[A] finger could be used to write legibly in the air without a touch surface, as well as providing input taps, flick gestures, and potentially pinches that could control a screened device from afar. Thanks to the magnetic sensing implementation, researchers suggest that even a visually obscured finger could be used to send text messages, interact with device UIs, and play games. Moreover, AuraRing has been designed to work on multiple finger and hand sizes.
Google & Adobe team up on XD -> Flutter
I love seeing mom & dad getting along 😌, especially in a notoriously hard-to-solve area where I spent years trying to improve Photoshop & other tools:
Flutter is Google's UI toolkit for developers to create native applications for mobile, web, and desktop, all from a single codebase. […]
XD to Flutter simplifies the designer-to-developer workflow for teams that build with Flutter; it removes guesswork and discrepancies between a user experience design and the final software product.
The plugin generates Dart code for design elements in XD that can be placed directly into your applicationâs codebase.
You can sign up for early access to the plug-in here.
Loupedeck Creative Tool looks slick
The $549 price tag is no joke, but for serious creators I can imagine this little guy being a delight to use:
[YouTube]
Podcast: Fun with FUIs
Mark Coleran is a mograph O.G. whose "Fantasy User Interface" ("FUI") work for movies I used to write about a lot back at Adobe. It was fun listening to him & other designers share a peek into this unique genre of visual storytelling via Adobe's great Wireframe podcast. I think you'll enjoy it:
Oculus Quest adds hand tracking
(Tangential reminder: You can build hand tracking into your mobile app right now using tech from my team.)
[YouTube]
AR: Adobe & MIT team up on body tracking to power presentations
Researchers from MIT Media Lab and Adobe Research recently introduced a real-time interactive augmented video system that enables presenters to use their bodies as storytelling tools by linking gestures to illustrative virtual graphic elements. […]
The speaker, positioned in front of an augmented reality mirror monitor, uses gestures to produce and manipulate the pre-programmed graphical elements.
Will presenters go for it? Will students find it valuable? I have no ideaâbut props to anyone willing to push some boundaries.
Don’t nag your family. Make Google do it.
I've gotta give this new capability a shot:
To assign a reminder, ask your Assistant, "Hey Google, remind Greg to take out the trash at 8pm." Greg will get a notification on both his Assistant-enabled Smart Display, speaker and phone when the reminder is created, so that it's on his radar. Greg will get notified again at the exact time you asked your Assistant to remind him. You can even quickly see which reminders you've assigned to Greg, simply by saying, "Hey Google, what are my reminders for Greg?"
Sneak peek: Gesture & facial recognition on Pixel 4
A “wearable keyboard”?
Sadly (lulz-wise) it’s not an Apple “Nimitz” keyboard superglued to someone’s chest, but this little dingus is… interesting?
The world is your oyster! Er, we mean keyboard pic.twitter.com/LOrGQCAJyo
— Mashable (@mashable) July 10, 2019
AR UX: Litho wearable pointer
Check out this funky little donkus, “a small finger-worn controller that connects to your smartphone or headset” to help you point at & control items in the world. It’s more easily demoed than explained:
[Vimeo]
Google has built… Lego-scanning radar?
No, for real. The Verge writes,
What does the computer interface of the future look like? One bet from Google is that it will involve invisible interfaces you can tweak and twiddle in mid-air. This is what the company is exploring via Project Soli, an experimental hardware program which uses miniature radar to detect movement, and which recently won approval from the FCC for further study.
But yes… Legos. See what you can make of this:
[YouTube]
AR: A virtual desktop on your actual desktop?
Here's a pretty darn clever idea for navigating among apps by treating your phone as a magic window into physical space.
You use the phone's spatial awareness to "pin" applications at a certain point in space, much like placing your notebook in one corner of your desk, and your calendar at another… You can create a literal landscape of apps that you can switch between by simply switching the location of your phone.
[Via]
New open-source Google AI experiments help people make art
One's differing physical abilities shouldn't stand in the way of drawing & making music. Body-tracking tech from my teammates George & Tyler (see previous) is just one of the new Web-based experiments in Creatability. Check it out:
Creatability is a set of experiments made in collaboration with creators and allies in the accessibility community. They explore how creative tools – drawing, music, and more – can be made more accessible using web and AI technology. They're just a start. We're sharing open-source code and tutorials for others to make their own projects.
[YouTube]
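For the curious: the experiments themselves aren't excerpted here, but the general "body tracking in the browser" idea can be sketched with TensorFlow.js's PoseNet model. This is just a hypothetical, minimal illustration of that approach – the element IDs, drawing behavior, and confidence threshold are all my own placeholders, not code from Creatability:

```ts
// Minimal sketch: use PoseNet (TensorFlow.js) to turn nose position into a drawing cursor.
// Assumes a page with a webcam-backed <video id="cam"> and a <canvas id="sketch">.
import '@tensorflow/tfjs';
import * as posenet from '@tensorflow-models/posenet';

async function main() {
  const video = document.getElementById('cam') as HTMLVideoElement;
  const canvas = document.getElementById('sketch') as HTMLCanvasElement;
  const ctx = canvas.getContext('2d')!;
  const net = await posenet.load(); // downloads the pose-estimation model

  async function frame() {
    const pose = await net.estimateSinglePose(video);
    const nose = pose.keypoints.find((k) => k.part === 'nose');
    // Only draw when the model is reasonably confident it found the nose.
    if (nose && nose.score > 0.5) {
      ctx.fillRect(nose.position.x, nose.position.y, 4, 4);
    }
    requestAnimationFrame(frame);
  }
  frame();
}

main();
```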
Power to the people: Voice UI helps a kid find autonomy
This is pretty rad:
Robbie has Duchenne muscular dystrophy, which has left him able to control only his eyes, head and right thumb joint. […] Bill Weis, a retired tech worker […] set up Robbie's bed to be controlled by voice activation. While working on the bed, Bill had an epiphany: if he can control the bed this way, why not everything else in Robbie's bedroom universe?
Check out the story of tech + kindness + grit:
[YouTube]
Google’s new Sketch plug-in helps you pair harmonious colors & fonts
Old Man Nack would've killed for this back in his designer days:
As Design Taxi writes,
"Material Theming" effectively fixes a core gripe with the original "Material Design": that virtually every Android app looks the "same," or as if it were made by Google, which isn't ideal for brands.
The tool is currently available on Sketch, and you can use it by downloading the "Material" plugin in the app. Google aims to expand the system regularly, and will roll out new options such as animations, depth controls, and textures next.
[YouTube]
UX: Spend four minutes learning to build better with Material Design
First, here's a 1-minute intro:
And here's the 4-minute overview of the new Material.io site, full of templates, design studies, and more:
UX: BMW shows off their HoloActive Touch
Help me, Bavarian Motor Works, you’re my only hope…
TechCrunch writes,
HoloActive Touch appears to float in air, and also provides actual felt, tactile feedback in response to interactions.
As for the tech used to make the interface feel somewhat physical, even though you're just poking around in mid-air, we've heard it might be sourced from Ultrahaptics, a company whose whole mission is to make it possible to feel things including "invisible buttons and dials" when you want them to be tangible, and then not when you don't.
[YouTube]
Voice-driven photo editing: Here we go again
Four years ago Adobe showed off a prototype of voice-driven photo editing:
Now theyâre back, showing a slicker but shallower (?) version of the same idea:
Well, we'll see. Hopefully there's a lot more to the Adobe tech. Meanwhile, I'm reminded of various VR photo-related demos. After donning a mask & shuffling around a room waving wands in the air like a goof, you realize, "Oh… so I just did the equivalent of zooming in & showing the caption?!"
Who f'ing cares?
You know what would be actually worth a damn? Let me say, "Okay, take all my shots where Henry is making the 'Henry Face,' then make an animated face collage made up of those faces – and while you're at it, P-shop him into a bunch of funny scenes." Don't give me a novel but cumbersome rehash, gimme some GD superpowers already.
But hey, they're making a new Blade Runner, so maybe now Ryan Gosling will edit his pics by voice, and they'll bring back talking cameras, and in the words of Stephen Colbert, "It's funny because nothing matters."
[YouTube]
Google desktop radar can identify materials, body parts
Seriously (unless, of course, the UI demo is just some elaborate trolling). I can't wait for social media to let you apply a "Facepalm" reaction by literally jamming your phone/palm against your face. Check out the demo & read on for details:
(Of course, in the current political climate I can't help but think, "Great, I'm glad this is the critically important shit we spend our biggest brains on.")
VR demo: Using gestures for control
Here's a rather nifty use of a phone's back-facing camera to enable gesture-based control in a Google Cardboard-style VR rig:
My friend Andy notes, “It’s fun to think of a future of people waving their fingers around in public with no externally visible context… WE’LL ALL BE WIZARDS!”
TechCrunch offers a handful of additional details.
[YouTube]
UI design: How to ask for permissions
Once you’re gone you can never come back⌠â Neil Young
Luke Wroblewski (designer, writer, & coincidentally my boss) shares a bunch of interesting details on how best to ask users for their permission to access location, etc. (e.g. âThe double dialog,â a decoy that gauges whether youâll say no; if so they try again later)
â¨[YouTube]
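To make that "double dialog" pattern concrete, here's a minimal, hypothetical sketch of a soft-prompt flow for the browser's geolocation API. The `showSoftPrompt` helper and its wording are stand-ins for whatever in-app dialog you'd design; only the `navigator.geolocation` call is a real browser API, and none of this is from Luke's talk:

```ts
// Hypothetical "double dialog" flow: ask with our own UI first, so a "no"
// doesn't burn the one-shot browser/system permission prompt.
async function showSoftPrompt(message: string): Promise<boolean> {
  // Stand-in for a nicely designed in-app dialog.
  return window.confirm(message);
}

async function requestLocation(): Promise<GeolocationPosition | null> {
  const willAllow = await showSoftPrompt(
    'Share your location so we can show nearby results?'
  );
  if (!willAllow) {
    return null; // User declined softly; the app can re-ask in a later session.
  }
  // Only now trigger the real permission dialog.
  return new Promise<GeolocationPosition | null>((resolve) =>
    navigator.geolocation.getCurrentPosition(resolve, () => resolve(null))
  );
}
```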
UI: The beautiful interfaces of The Martian
Territory Studio nailed a tricky middle ground (futuristic but not fanciful) in crafting some great-looking interfaces for The Martian. Take a look:
Working closely with NASA, Territory developed a series of deft and elegant concepts that combine factual integrity and filmic narrative, yet are forward-looking, pushing NASA's current UI conventions as much as possible.
Territory's plot-based graphics include identities and visual languages for each set, spanning images, text, code, engineering schematics, and 3D visualisations based on authentic satellite images of Martian terrain, weather, and mission equipment, served across consoles, navigation and communication systems, laptops, mobiles, tablets, and arm screens throughout.
In all, Territory delivered around 400 screens for on-set playback, most of them featuring interactive elements. With 85 screens on the NASA Mission Control set alone, a number of which were 6m x 18m wall screens, there are many moments in which the graphics become a dynamic bridge between Earth and Mars, narrative and action, audience and characters.
Super fun FUI (fake UI) Toys
I’m loving Peter Quinn’s bouncing, tongue-in-cheek interface elements:
Super fun, pre-animated, sometimes looping, customizable Fake User Interface assets, as editable After Effects comps. Just drag and drop to quickly create and customize FUI layouts to suit your projects.
[Vimeo] [Via Justin Maxwell]
Microsoft introduces HoloLens
Please tell me this is emitted by an R2 unit. The concept:
And here’s the current hardware in action:
For yet more, check out the Verge's up-close report.
[YouTube]
Flow: A precise, gestural, haptic UI
What do you think of this thing?
It could be cool, but I find myself getting old & jaded. The Leap Motion sensor has yet to take off, and I'm reminded of Logitech's NuLOOQ Navigator. It was announced some 9 years ago, drove Adobe tools in similar ways, and failed to find traction in the market (though it's evidently been superseded by the SpacePilot Pro).
But hey, who knows?
New Photoshop shortcuts for layer visibility, locking
Having an excessive interest in keyboard shortcuts (I once wrote an edition of a book dedicated to this subject), I'm delighted to see some welcome tweaks arriving in Photoshop CC. According to Julieanne Kost's blog:
- Cmd-comma hides/shows the currently selected layer(s)
- Cmd-opt-comma shows all layers
- Cmd-slash locks/unlocks the currently selected layer(s)
- Cmd-opt-slash unlocks all layers
(On Windows substitute Ctrl-Alt for Cmd-Opt) [Via Jeff Tranberry]
FingerSense expands a device’s touch vocabulary
If “Double knuckle knock” becomes more than, I dunno, presumably some gross phrase you’d find on Urban Dictionary, you may thank the folks at Qeexo:
FingerSense is an enhancement to touch interaction that allows conventional screens to know how the finger is being used for input: fingertip, knuckle or nail. Further, our system can add support for a passive stylus with an eraser. The technology is lightweight, low-latency and cost effective.
New iOS design resources
- Tethr bills itself as "The last UI kit you'll ever need" and "The Most Beautiful iOS Design Kit Ever Made." I'll leave that judgement to you, but at a glance it looks like some nicely assembled PSD templates.
- You don't actually need Photoshop to leverage these templates, either: Adobe's Web-based Project Parfait can extract content "as 8-bit PNG, 32-bit PNG, JPG, and SVG images."
A multitouch Pizza Hut table, for God’s sake
I really love the part where it helps 780 million people find the clean water they need. (Er, wait…)
Concept: A multitouch car UI
HmmâI’m not sold (at all) on the discoverability of this thing, but I remain deeply eager to see someone break open the staid, hoary world of in-car electronics. (The hyped Sync system in our new Fusion is capable but byzantine & laggy. What’s waiting a second+ after button pushes between friendsâbesides roughly 100 feet traveled at speed?) What do you think?
[YouTube] [Via Christian Cantrell]