Category Archives: User Interface

Reinterpreting classic instrument clusters in the age of CarPlay

“Tell me about a product you hate that you use regularly.” I asked this question of hundreds of Google PM candidates I interviewed, and it was always a great bozo detector. Most people don’t have much of an answer—no real passion or perspective. I want to know not just what sucks, but why it sucks.

If I were asked the same question, I’d immediately say “Every car infotainment system ever made.” As Tolstoy might say, “Each one is unhappy in its own way.” The most interesting thing, I think, isn’t just to talk about the crappy mismatched & competing experiences, but rather about why every system I’ve ever used sucks. The answer can’t be “Every person at every company is a moron”—so what is it?

So much comes down to the structure of the industry, with hardware & software being made by a mishmash of corporate frenemies, all contending with a soup of regulations, risk aversion (one recall can destroy the profitability of a whole product line), and surprisingly bargain-bin electronics.

Despite all that, talented folks continue to fight the good fight, and I enjoyed John LePore’s speculative designs that reinterpret the instrument clusters of classic cars (from Corvettes to DeLoreans) through Apple’s latest CarPlay framework:

Making today’s AI interfaces “look completely absurd”

Time is a flat circle…

Daring Fireball’s Mac 40th anniversary post contained a couple of quotes that made me think about the current state of interaction with AI tools, particularly around imaging. First, there’s this line from Steven Levy’s review of the original Mac:

[W]hat you might expect to see is some sort of opaque code, called a “prompt,” consisting of phosphorescent green or white letters on a murky background.

Think about how revolutionarily different & better this was (DOS-heads’ gripes notwithstanding).

What you see with Macintosh is the Finder. On a pleasant, light background, little pictures called “icons” appear, representing choices available to you.

And then there’s this kicker:

“When you show Mac to an absolute novice,” says Chris Espinosa, the twenty-two-year-old head of publications for the Mac team, “he assumes that’s the way all computers work. That’s our highest achievement. We’ve made almost every computer that’s ever been made look completely absurd.”

I don’t know quite what will make today’s prompt-heavy approach to generation feel equivalently quaint, but think how far we’ve come in less than two years since DALL•E’s public debut—from swapping long, arcane codes to having more conversational, iterative creation flows (esp. via ChatGPT) and creating through direct, realtime UIs like those offered via Krea & Leonardo. Throw in a dash of spatial computing, perhaps via “glasses that look like glasses,” and who knows where we’ll be!

But it sure as heck won’t mainly be knowing “some sort of opaque code, called a ‘prompt.’”

Stable Diffusion + ArtBreeder = creative composition

We are teetering on the cusp of a Cambrian explosion in UI creativity, with hundreds of developers competing to put amazing controls atop a phalanx of ever-improving generative models. These next couple of months & years are gonna be wiiiiiiild.

Are you a badass Web developer? Join us!

My team is working to build some seriously exciting, AI-driven experiences & deliver them via the Web. We’re looking for a really savvy, energetic partner who can help us explore and ship novel Web-based interfaces that reach millions of people. If that sounds like you or someone you know, please read on.

———-

Key Responsibilities:

  • Implement the features and user interfaces of our AI-driven product
  • Work closely with UX designers, product managers, machine-learning scientists, and ML engineers to develop dynamic and compelling UI experiences
  • Architect efficient and reusable front-end systems that drive complex web/mobile applications

Must Have:

  • BS/MS in Computer Science or a related technical field
  • Expert-level experience with JavaScript/TypeScript and libraries/frameworks such as Web Components, LitElement, React, Redux, RxJS, Materialize, jQuery, and Node.js (see the sketch just after this list)
  • Expert-level experience with HTML and CSS, plus strong JavaScript fundamentals, including concepts like asynchronous programming, closures, and types
  • Strong experience working with build tools such as Rush, webpack, and npm
  • Strong experience with cross-browser support, browser APIs, and caching/optimization techniques for faster page loads and better front-end performance
  • Familiarity with scripting languages such as Python
  • Ability to take a project from scoping requirements through launch
  • Experience communicating with users, other technical teams, and management to collect requirements and describe software features and technical designs
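
For a flavor of what that stack looks like in practice, here’s a minimal sketch of a web component built with LitElement in TypeScript. It’s purely illustrative: the tag name, properties, and event are my own assumptions, not actual product code.

```typescript
// A minimal, illustrative LitElement component in TypeScript.
// Nothing here is real product code; all names are hypothetical.
import { LitElement, html, css } from 'lit';
import { customElement, property, state } from 'lit/decorators.js';

@customElement('prompt-bar')
export class PromptBar extends LitElement {
  static styles = css`
    :host { display: flex; gap: 8px; }
    input { flex: 1; }
  `;

  // Placeholder text for the input, settable from markup.
  @property() placeholder = 'Describe an image…';

  // The user's in-progress prompt (internal reactive state).
  @state() private draft = '';

  render() {
    return html`
      <input
        .value=${this.draft}
        placeholder=${this.placeholder}
        @input=${(e: InputEvent) =>
          (this.draft = (e.target as HTMLInputElement).value)}
      />
      <button @click=${this.submit}>Generate</button>
    `;
  }

  // Bubble the prompt up; a host app would hand it to the model service.
  private submit() {
    this.dispatchEvent(
      new CustomEvent('prompt-submit', {
        detail: this.draft,
        bubbles: true,
        composed: true,
      })
    );
  }
}
```

A host page would then just drop <prompt-bar> into its markup and listen for the prompt-submit event.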

Come help me design The Future!

I’m incredibly excited to say that my team has just opened a really rare role to design AI-first experiences. From the job listing:

Together, we are working to inspire and empower the next generation of creatives. You will play an integral part, designing and prototyping exciting new product experiences that take full advantage of the latest AI technology from Adobe research. We’ll work iteratively to design, prototype, and test novel creative experiences, develop a deep understanding of user needs and craft new AI-first creative tools that empower users in entirely new and unimagined ways.

Your challenge is to help us pioneer AI-first creation experiences by creating novel experiences that are intuitive, empowering and first of kind.

By necessity that’s a little vague, but trust me, this stuff is wild (check out some of what I’ve been posting in the AI/ML category here), and I need a badass fellow explorer. I really want a partner who’s excited to have a full seat at the table alongside product & eng (i.e. you’re in the opposite of a service relationship where we just chuck things over the wall and say “make this pretty!”), and who’s excited to rapidly visualize a lot of ideas that we’ll test together.

We are at a fascinating inflection point, where computers learn to see more like people & can thus deliver new expressive superpowers. There will be many dead ends & many challenging ethical questions that need your careful consideration—but as Larry Page might say, it’s all “uncomfortably exciting.” 🔥

If you might be the partner we need, please get in touch via the form above, and feel free to share this opportunity with anyone who might be a great fit. Thanks!

Google taps (heh) Project Jacquard to improve accessibility

It’s always cool to see people using tech to help make the world more accessible to everyone:

This research inspired us to use Jacquard technology to create a soft, interactive patch or sleeve that allows people to access digital, health and security services with simple gestures. This woven technology can be worn or positioned on a variety of surfaces and locations, adjusting to the needs of each individual. 

We teamed up with Garrison Redd, a Para powerlifter and advocate in the disability community, to test this new idea. 

UI Faces enables easy avatar insertion

As I obviously have synthetic faces on my mind, here’s a rather cool tool for finding diverse images of people and adding them to design layouts:

UI Faces aggregates thousands of avatars which you can carefully filter to create your perfect personas or just generate random avatars.

Each avatar is tagged with age, gender, emotion and hair color using Microsoft’s Face API, providing easier filtration and sorting.

Here’s how it integrates into Adobe XD:

Chrome is reducing memory usage, adding tab search, and more

As always, I’m low-key embarrassed to find this stuff exciting, but ¯\_(ツ)_/¯. The team writes,

Chrome now prioritizes your active tabs vs. everything that’s open—reducing CPU usage by up to 5x and extending battery life by up to 1.25 hours (based on our internal benchmarks).

Plus:

You can pin tabs (for those go-to pages), send tabs to your other devices and even group tabs in Chrome. This month we’re adding tab search to the toolbox.

You’ll now be able to see a list of your open tabs—regardless of the window they’re in—then quickly type to find the one you need. It’s search … for your tabs! The feature is coming first to Chromebooks, then to other desktop platforms soon.

Tab search has rolled out on Chrome OS & is due to come to other platforms soon.

Oh, and the “omnibox” (URL/search/dessert topping/floor wax) is learning to act on new things you type. Initial actions:

  • Clear Browsing Data – type ‘delete history’, ‘clear cache’ or ‘wipe cookies’
  • Manage Payment Methods – type ‘edit credit card’ or ‘update card info’
  • Open Incognito Window – type ‘launch incognito mode’ or ‘incognito’
  • Manage Passwords – type ‘edit passwords’ or ‘update credentials’
  • Update Chrome – type ‘update browser’ or ‘update google chrome’
  • Translate Page – type ‘translate this’ or ‘translate this page’

AR soups up the humble car manual

The new Ram comes with an augmented reality feature for exploring one’s 700hp whip:

Hovering the camera over the steering wheel will show customers how to use the steering wheel controls or paddle shifters, while pointing at the dashboard will show infotainment functionality.

The app was developed in just three months to roll out on the 2021 Ram TRX. The wild truck will be the first vehicle to use the Know & Go app, and it will be available on other FCA vehicles down the line.

Employee-Developed Know & Go Mobile App Debuts on 2021 Ram 1500 TRX

Give FX the finger in AR

The free new app Diorama pairs with the $99 finger-worn Litho device to let you create AR movies directly inside your phone, using a selection of props & tapping into the Google Poly library:

VR Focus writes,

“Diorama will democratize the creation of special effects in the same way the smartphone democratized photography. It will allow anyone to create beautiful visual effects the likes of which have previously only been accessible to Hollywood studios,” said Nat Martin, Founder at Litho in a statement.

When combined with the Litho controller, users can animate objects simply by dragging them, fine-tuning the path by grabbing specific points. Mood lighting can be added thanks to a selection of filters, plus the app supports body tracking so creators can interact with a scene.

Chrome gets tab groups

“Can I get that icon in cornflower blue…?”

Being a middle-aged man getting excited about tab management in a Web browser makes me a little queasy—but hey, I live in this stuff all day, so 🎉.

You can now group tabs in Chrome:

You can collapse the tab groups, and you can make the titles small:

My pro tip is that you can use an emoji as a group name such as ❤️ for inspiration or 📖 for articles to read.

Hey, find joy where you can, amirite? 😌

After 10+ years of teasing, Microsoft’s dual-screen device arrives

Back in the day (like, when Obama was brand new in office), I was intrigued by Microsoft’s dual-screen tablet Courier concept. Check out this preview from 2009:

The device never saw production, and some of the brains behind it went on to launch the lovely Paper drawing app for iPad. Now, however, the company is introducing the Surface Duo, and I think it looks slick:

Fun detail I’d never have guessed in 2009: it runs Android, not Windows!

The price is high ($1,400 and up for something that’s not really a phone or a laptop—though something that could replace both some of the time?), and people are expressing skepticism, but we’ll see how things go. Congrats to the folks who persevered with that interesting original concept.

Action Blocks make tasks more accessible for those with cognitive impairments

I’ve long joked-not-joked that I want better parental controls on devices, not so that I can control my kids but so that I can help my parents. How great would it be to be able to configure something like this, then push it to the devices of those who need it (parents, kids, etc.)?

Check out the gesture-sensing holographic Looking Glass

This little dude looks nifty as heck:

The Looking Glass is powered by our proprietary 45-element light field technology, generating 45 distinct and simultaneous perspectives of three-dimensional content of any sort.

This means multiple people around a Looking Glass are shown different perspectives of that three-dimensional content—whether that’s a 3D animation, DICOM medical imaging data, or a Unity project – in super-stereoscopic 3D, in the real world without any VR or AR headgear.


[Vimeo]

AuraRing, a trippy ring + wristband combo gesture system

Hmm—AR glasses + smart watch (or FitBit) + ring? 🧐

VentureBeat writes,

[A] finger could be used to write legibly in the air without a touch surface, as well as providing input taps, flick gestures, and potentially pinches that could control a screened device from afar. Thanks to the magnetic sensing implementation, researchers suggest that even a visually obscured finger could be used to send text messages, interact with device UIs, and play games. Moreover, AuraRing has been designed to work on multiple finger and hand sizes.

[YouTube] [Via]

Google & Adobe team up on XD -> Flutter

I love seeing mom & dad getting along 😌, especially in a notoriously hard-to-solve area where I spent years trying to improve Photoshop & other tools:

Flutter is Google’s UI toolkit for developers to create native applications for mobile, web, and desktop, all from a single codebase. […]

XD to Flutter simplifies the designer-to-developer workflow for teams that build with Flutter; it removes guesswork and discrepancies between a user experience design and the final software product.

The plugin generates Dart code for design elements in XD that can be placed directly into your application’s codebase.
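
To make that concrete, here’s a toy sketch in TypeScript of the core idea: walk a (grossly simplified) design node and emit Dart widget source. The node shape and the emitted code are illustrative assumptions on my part, not XD’s scenegraph or the plugin’s actual output.

```typescript
// Toy design-to-code generator: map a simplified design node to
// Flutter/Dart widget source. Purely illustrative; the real plugin
// works from XD's actual document model.
interface DesignNode {
  type: 'text' | 'rectangle';
  text?: string;
  width?: number;
  height?: number;
  fill?: string; // hex color, e.g. '#FF0000'
}

function toDart(node: DesignNode): string {
  switch (node.type) {
    case 'text':
      // A text element maps to a Flutter Text widget.
      return `Text(${JSON.stringify(node.text ?? '')})`;
    case 'rectangle': {
      // A filled rectangle maps to a sized, colored Container.
      const hex = (node.fill ?? '#000000').slice(1).toUpperCase();
      return [
        'Container(',
        `  width: ${node.width ?? 0},`,
        `  height: ${node.height ?? 0},`,
        `  color: const Color(0xFF${hex}),`,
        ')',
      ].join('\n');
    }
  }
}

// Example: a 200x100 red rectangle becomes a Flutter Container.
console.log(toDart({ type: 'rectangle', width: 200, height: 100, fill: '#FF0000' }));
```

Multiply that across groups, text styles, images, and component states, and you get a sense of both the appeal and the difficulty of design-to-code tools.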

You can sign up for early access to the plug-in here.


AR: Adobe & MIT team up on body tracking to power presentations

Fun, funky idea:

Researchers from MIT Media Lab and Adobe Research recently introduced a real-time interactive augmented video system that enables presenters to use their bodies as storytelling tools by linking gestures to illustrative virtual graphic elements. […]

The speaker, positioned in front of an augmented reality mirror monitor, uses gestures to produce and manipulate the pre-programmed graphical elements.

Will presenters go for it? Will students find it valuable? I have no idea—but props to anyone willing to push some boundaries.

Don’t nag your family. Make Google do it.

I’ve gotta give this new capability a shot:

To assign a reminder, ask your Assistant, “Hey Google, remind Greg to take out the trash at 8pm.” Greg will get a notification on both his Assistant-enabled Smart Display, speaker and phone when the reminder is created, so that it’s on his radar. Greg will get notified again at the exact time you asked your Assistant to remind him. You can even quickly see which reminders you’ve assigned to Greg, simply by saying, “Hey Google, what are my reminders for Greg?”

Google has built… Lego-scanning radar?

No, for real. The Verge writes,

What does the computer interface of the future look like? One bet from Google is that it will involve invisible interfaces you can tweak and twiddle in mid-air. This is what the company is exploring via Project Soli, an experimental hardware program which uses miniature radar to detect movement, and which recently won approval from the FCC for further study.

But yes… Legos. See what you can make of this:

[YouTube]

AR: A virtual desktop on your actual desktop?

Here’s a pretty darn clever idea for navigating among apps by treating your phone as a magic window into physical space.

You use the phone’s spatial awareness to ‘pin’ applications in a certain point in space, much like placing your notebook in one corner of your desk, and your calendar at another… You can create a literal landscape of apps that you can switch between by simply switching the location of your phone.


[Via]

New open-source Google AI experiments help people make art

One’s differing physical abilities shouldn’t stand in the way of drawing & making music. Body-tracking tech from my teammates George & Tyler (see previous) is just one of the new Web-based experiments in Creatability. Check it out:

Creatability is a set of experiments made in collaboration with creators and allies in the accessibility community. They explore how creative tools – drawing, music, and more – can be made more accessible using web and AI technology. They’re just a start. We’re sharing open-source code and tutorials for others to make their own projects.


[YouTube]

Power to the people: Voice UI helps a kid find autonomy

This is pretty rad:

Robbie has Duchenne muscular dystrophy, which has left him able to control only his eyes, head and right thumb joint. […] Bill Weis, a retired tech worker […] set up Robbie’s bed to be controlled by voice activation. While working on the bed, Bill had an epiphany: if he can control the bed this way, why not everything else in Robbie’s bedroom universe?

Check out the story of tech + kindness + grit:

[YouTube]

Google’s new Sketch plug-in helps you pair harmonious colors & fonts

Old Man Nack would’ve killed for this back in his designer days:

As Design Taxi writes,

“Material Theming” effectively fixes a core gripe of the original “Material Design”: that virtually every Android app looks the “same,” or made by Google, which isn’t ideal for brands.

The tool is currently available on Sketch, and you can use it by downloading the “Material” plugin on the app. Google aims to expand the system regularly, and will roll out new options such as animations, depth controls, and textures, next.


[YouTube]

UX: BMW shows off their HoloActive Touch

Help me, Bavarian Motor Works, you’re my only hope…

TechCrunch writes,

HoloActive Touch appears to float in air, and also provides actual felt, tactile feedback in response to interactions.

As for the tech used to make the interface feel somewhat physical, even though you’re just poking around in mid-air, we’ve heard it might be sourced from Ultrahaptics, a company whose whole mission is to make it possible to feel things including “invisible buttons and dials” when you want them to be tangible, and then not when you don’t.


[YouTube]

Voice-driven photo editing: Here we go again

Four years ago Adobe showed off a prototype of voice-driven photo editing:

Now they’re back, showing a slicker but shallower (?) version of the same idea:

Well, we’ll see. Hopefully there’s a lot more to the Adobe tech. Meanwhile, I’m reminded of various VR photo-related demos. After donning a mask & shuffling around a room waving wands in the air like a goof, you realize, “Oh… so I just did the equivalent of zooming in & showing the caption?!”

Who f’ing cares?

You know what would be actually worth a damn? Let me say, “Okay, take all my shots where Henry is making the ‘Henry Face,’ then make an animated face collage made up of those faces—and while you’re at it, P-shop him into a bunch of funny scenes.” Don’t give me a novel but cumbersome rehash, gimme some GD superpowers already.

But hey, they’re making a new Blade Runner, so maybe now Ryan Gosling will edit his pics by voice, and they’ll bring back talking cameras, and in the words of Stephen Colbert, “It’s funny because nothing matters.”


[YouTube]

Google desktop radar can identify materials, body parts

Seriously (unless, of course, the UI demo is just some elaborate trolling). I can’t wait for social media to let you apply a “Facepalm” reaction by literally jamming your phone/palm against your face. Check out the demo & read on for details:

(Of course, in the current political climate I can’t help but think, “Great, I’m glad this is the critically important shit we spend our biggest brains on.”)


[YouTube] [Via]

UI: The beautiful interfaces of The Martian

Territory Studio nailed a tricky middle ground (futuristic but not fanciful) in crafting some great-looking interfaces for The Martian. Take a look:

Working closely with NASA, Territory developed a series of deft and elegant concepts that combine factual integrity and filmic narrative, yet are forward looking and pushing NASA’s current UI conventions as much as possible.

Territory’s plot-based graphics include identities and visual languages for each set, including images, text, code, engineering schematics, and 3D visualisations based on authentic satellite images showing Martian terrain, weather, and mission equipment, served across consoles, navigation and communication systems, laptops, mobiles, tablets, and arm screens throughout.

In all, Territory delivered around 400 screens for on-set playback, most of them featuring interactive elements. With 85 screens on the NASA Mission Control set alone, a number of which were 6m x 18m wall screens, there are many moments in which the graphics become a dynamic bridge between Earth and Mars, narrative and action, audience and characters.

[Vimeo] [Via]

New Photoshop shortcuts for layer visibility, locking

Having an excessive interest in keyboard shortcuts (I once wrote an edition of a book dedicated to the subject), I’m delighted to see some welcome tweaks arriving in Photoshop CC. According to Julieanne Kost’s blog:

  • Cmd-comma hides/shows the currently selected layer(s)
  • Cmd-opt-comma shows all layers
  • Cmd-slash locks/unlocks the currently selected layer(s)
  • Cmd-opt-slash unlocks all layers

(On Windows, substitute Ctrl for Cmd and Alt for Opt.) [Via Jeff Tranberry]

FingerSense expands a device’s touch vocabulary

If “Double knuckle knock” becomes more than, I dunno, presumably some gross phrase you’d find on Urban Dictionary, you may thank the folks at Qeexo:

FingerSense is an enhancement to touch interaction that allows conventional screens to know how the finger is being used for input: fingertip, knuckle or nail. Further, our system can add support for a passive stylus with an eraser. The technology is lightweight, low-latency and cost effective.

[Vimeo] [Via]

New iOS design resources

  • Tethr bills itself as “The last UI kit you’ll ever need” and “The Most Beautiful iOS Design Kit Ever Made.” I’ll leave that judgement to you, but at a glance it looks like some nicely assembled PSD templates.
  • You don’t actually need Photoshop to leverage these templates, either: Adobe’s Web-based Project Parfait can extract content “as 8-bit PNG, 32-bit PNG, JPG, and SVG images.”