TBH I still kinda want one. 🙂
“I strongly believe that animation skills are going to be the next big thing in UI design,” says designer Michal Malewicz. Check out his full set of predictions for the year ahead:
We are teetering on the cusp of a Cambrian explosion in UI creativity, with hundreds of developers competing to put amazing controls atop a phalanx of ever-improving generative models. These next couple of months & years are gonna be wiiiiiiild.
This tool looks rather nifty, though I haven’t yet had a chance to try it via iOS. Here’s a quick demo:
My team is working to build some seriously exciting, AI-driven experiences & deliver them via the Web. We’re looking for a really savvy, energetic partner who can help us explore and ship novel Web-based interfaces that reach millions of people. If that sounds like you or someone you know, please read on.
- Implement the features and user interfaces of our AI-driven product
- Work closely with UX designers, product managers, machine learning scientists, and ML engineers to develop dynamic and compelling UI experiences
- Architect efficient and reusable front-end systems that drive complex web/mobile applications
- BS/MS in Computer Science or a related technical field
- Expert-level experience with HTML, CSS, and JavaScript, including concepts like asynchronous programming, closures, and types (see the quick sketch after this list)
- Strong experience working with build tools such as Rush, Webpack, npm
- Strong experience with cross-browser support, browser APIs, and caching and optimization techniques for faster page loads and better front-end performance
- Familiar with scripting languages, such as Python
- Ability to take a project from scoping requirements through launch
- Experience communicating with users, other technical teams, and management to collect requirements and describe software product features and technical designs
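(Since “closures” and “asynchronous programming” read as jargon in a bulleted list, here’s the quick sketch promised above: a tiny, illustrative bit of TypeScript showing the flavor of thing we mean. The function names and endpoint are placeholders of my own, not anything from our codebase.)

```typescript
// A closure: the returned function "remembers" count between calls,
// even after makeCounter itself has finished running.
function makeCounter(): () => number {
  let count = 0;
  return () => ++count;
}

// Asynchronous programming: await a network request without blocking the UI.
// The endpoint below is a made-up placeholder.
async function loadProfile(userId: string): Promise<unknown> {
  const response = await fetch(`/api/users/${userId}`);
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  return response.json();
}

const next = makeCounter();
console.log(next(), next()); // logs 1, then 2
```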
Man, I love stuff like Project Relate, and it’s fun to see some of my old teammates featured here. This is the stuff that’s really worth a damn, IMHO.
I’m incredibly excited to say that my team has just opened a really rare role to design AI-first experiences. From the job listing:
Together, we are working to inspire and empower the next generation of creatives. You will play an integral part, designing and prototyping exciting new product experiences that take full advantage of the latest AI technology from Adobe research. We’ll work iteratively to design, prototype, and test novel creative experiences, develop a deep understanding of user needs and craft new AI-first creative tools that empower users in entirely new and unimagined ways.
Your challenge is to help us pioneer AI-first creation experiences by creating novel experiences that are intuitive, empowering, and first of their kind.
By necessity that’s a little vague, but trust me, this stuff is wild (check out some of what I’ve been posting in the AI/ML category here), and I need a badass fellow explorer. I really want a partner who’s excited to have a full seat at the table alongside product & eng (i.e. you’re in the opposite of a service relationship where we just chuck things over the wall and say “make this pretty!”), and who’s excited to rapidly visualize a lot of ideas that we’ll test together.
We are at a fascinating inflection point, where computers learn to see more like people & can thus deliver new expressive superpowers. There will be many dead ends & many challenging ethical questions that need your careful consideration—but as Larry Page might say, it’s all “uncomfortably exciting.” 🔥
If you might be the partner we need, please get in touch via the form above, and feel free to share this opportunity with anyone who might be a great fit. Thanks!
It’s always cool to see people using tech to help make the world more accessible to everyone:
This research inspired us to use Jacquard technology to create a soft, interactive patch or sleeve that allows people to access digital, health and security services with simple gestures. This woven technology can be worn or positioned on a variety of surfaces and locations, adjusting to the needs of each individual.
We teamed up with Garrison Redd, a Para powerlifter and advocate in the disability community, to test this new idea.
As I obviously have synthetic faces on my mind, here’s a rather cool tool for finding diverse images of people and adding them to design layouts:
UI Faces aggregates thousands of avatars which you can carefully filter to create your perfect personas or just generate random avatars.
Each avatar is tagged with age, gender, emotion and hair color using Microsoft’s Face API, providing easier filtering and sorting.
Here’s how it integrates into Adobe XD:
Heh—this joke UI reminds me of South Park’s Segway parody, “The Entity,” but would probably be a bit less invasive to use. 😌
David Salesin led Adobe Research for the better part of a decade, and now that he’s at Google, he & others have been collaborating with university researchers to enable fast, fun character animation:
Chrome now prioritizes your active tabs vs. everything that’s open—reducing CPU usage by up to 5x and extending battery life by up to 1.25 hours (based on our internal benchmarks).
You’ll now be able to see a list of your open tabs—regardless of the window they’re in—then quickly type to find the one you need. It’s search … for your tabs! The feature is coming first to Chromebooks, then to other desktop platforms soon.
Tab search has rolled out on Chrome OS & is due to come to other platforms soon.
Oh, and the “omnibox” (URL/search/dessert topping/floor wax) is learning to do new things with what you type in. Initial actions:
- Clear Browsing Data – type ‘delete history’, ‘clear cache’ or ‘wipe cookies’
- Manage Payment Methods – type ‘edit credit card’ or ‘update card info’
- Open Incognito Window – type ‘launch incognito mode’ or ‘incognito’
- Manage Passwords – type ‘edit passwords’ or ‘update credentials’
- Update Chrome – type ‘update browser’ or ‘update google chrome’
- Translate Page – type ‘translate this’ or ‘translate this page’
Downside: You’re sticking it to the earth to the tune of 12mpg.
Upside: U-turn arrows are neatly curved!
Swapping out the traditional display area & presenting a camera feed—which can evidently feature night vision as well—is a clever alternative to projecting a washed-out HUD onto the windshield.
The new Ram comes with an augmented reality feature for exploring one’s 700hp whip:
Hovering the camera over the steering wheel will show customers how to use the steering wheel controls or paddle shifters, while pointing at the dashboard will show infotainment functionality.
The app was developed in just three months to roll out on the 2021 Ram TRX. The wild truck will be the first vehicle to use the Know & Go app, and it will be available on other FCA vehicles down the line.
The free new app Diorama pairs with the $99 finger-worn Litho device to let you create AR movies directly inside your phone, using a selection of props & tapping into the Google Poly library:
VR Focus writes,
“Diorama will democratize the creation of special effects in the same way the smartphone democratized photography. It will allow anyone to create beautiful visual effects the likes of which have previously only been accessible to Hollywood studios,” said Nat Martin, Founder at Litho, in a statement.
When combined with the Litho controller, users can animate objects simply by dragging them, fine-tuning the path by grabbing specific points. Mood lighting can be added thanks to a selection of filters, and the app supports body tracking so creators can interact with a scene.
“Can I get that icon in cornflower blue…?”
Being a middle-aged man getting excited about tab management in a Web browser makes me a little queasy—but hey, I live in this stuff all day, so 🎉.
You can now group tabs in Chrome:
You can collapse the tab groups, and you can make the titles small:
My pro tip is that you can use an emoji as a group name such as ❤️ for inspiration or 📖 for articles to read.
Hey, find joy where you can, amirite? 😌
Back in the day (like, when Obama was brand new in office), I was intrigued by Microsoft’s dual-screen tablet Courier concept. Check out this preview from 2009:
The device never saw production, and some of the brains behind it went on to launch the lovely Paper drawing app for iPad. Now, however, the company is introducing the Surface Duo, and I think it looks slick:
Fun detail I’d never have guessed in 2009: it runs Android, not Windows!
The price is high ($1,400 and up for something that’s not really a phone or a laptop—though something that could replace both some of the time?), and people are expressing skepticism, but we’ll see how things go. Congrats to the folks who persevered with that interesting original concept.
I’ve long joked-not-joked that I want better parental controls on devices, not so that I can control my kids but so that I can help my parents. How great would it be to be able to configure something like this, then push it to the devices of those who need it (parents, kids, etc.)?
This little dude looks nifty as heck:
The Looking Glass is powered by our proprietary 45-element light field technology, generating 45 distinct and simultaneous perspectives of three-dimensional content of any sort.
This means multiple people around a Looking Glass are shown different perspectives of that three-dimensional content—whether that’s a 3D animation, DICOM medical imaging data, or a Unity project—in super-stereoscopic 3D, in the real world without any VR or AR headgear.
Crafty Rube Goldberg-ing for social good (making tech more accessible):
Control your Mac using head movements. Rotate your head to move the cursor and make facial expressions to click, drag, and scroll. Powered by your iPhone’s TrueDepth camera.
Hmm—AR glasses + smart watch (or FitBit) + ring? 🧐
[A] finger could be used to write legibly in the air without a touch surface, as well as providing input taps, flick gestures, and potentially pinches that could control a screened device from afar. Thanks to the magnetic sensing implementation, researchers suggest that even a visually obscured finger could be used to send text messages, interact with device UIs, and play games. Moreover, AuraRing has been designed to work on multiple finger and hand sizes.
I love seeing mom & dad getting along 😌, especially in a notoriously hard-to-solve area where I spent years trying to improve Photoshop & other tools:
Flutter is Google’s UI toolkit for developers to create native applications for mobile, web, and desktop, all from a single codebase. […]
XD to Flutter simplifies the designer-to-developer workflow for teams that build with Flutter; it removes guesswork and discrepancies between a user experience design and the final software product.
The plugin generates Dart code for design elements in XD that can be placed directly into your application’s codebase.
Mark Coleran is a mograph O.G. whose “Fantasy User Interface” (“FUI”) work for movies I used to write about a lot back at Adobe. It was fun listening to him & other designers share a peek into this unique genre of visual storytelling via Adobe’s great Wireframe podcast. I think you’ll enjoy it:
Researchers from MIT Media Lab and Adobe Research recently introduced a real-time interactive augmented video system that enables presenters to use their bodies as storytelling tools by linking gestures to illustrative virtual graphic elements. […]
The speaker, positioned in front of an augmented reality mirror monitor, uses gestures to produce and manipulate the pre-programmed graphical elements.
Will presenters go for it? Will students find it valuable? I have no idea—but props to anyone willing to push some boundaries.
I’ve gotta give this new capability a shot:
To assign a reminder, ask your Assistant, “Hey Google, remind Greg to take out the trash at 8pm.” Greg will get a notification on his Assistant-enabled Smart Display, speaker, and phone when the reminder is created, so that it’s on his radar. Greg will get notified again at the exact time you asked your Assistant to remind him. You can even quickly see which reminders you’ve assigned to Greg, simply by saying, “Hey Google, what are my reminders for Greg?”
No, for real. The Verge writes,
What does the computer interface of the future look like? One bet from Google is that it will involve invisible interfaces you can tweak and twiddle in mid-air. This is what the company is exploring via Project Soli, an experimental hardware program which uses miniature radar to detect movement, and which recently won approval from the FCC for further study.
But yes… Legos. See what you can make of this:
Here’s a pretty darn clever idea for navigating among apps by treating your phone as a magic window into physical space.
You use the phone’s spatial awareness to ‘pin’ applications at a certain point in space, much like placing your notebook in one corner of your desk, and your calendar at another… You can create a literal landscape of apps that you can switch between by simply changing the location of your phone.
One’s differing physical abilities shouldn’t stand in the way of drawing & making music. Body-tracking tech from my teammates George & Tyler (see previous) is just one of the new Web-based experiments in Creatability. Check it out:
Creatability is a set of experiments made in collaboration with creators and allies in the accessibility community. They explore how creative tools – drawing, music, and more – can be made more accessible using web and AI technology. They’re just a start. We’re sharing open-source code and tutorials for others to make their own projects.
This is pretty rad:
Robbie has Duchenne muscular dystrophy, which has left him able to control only his eyes, head and right thumb joint. […] Bill Weis, a retired tech worker […] set up Robbie’s bed to be controlled by voice activation. While working on the bed, Bill had an epiphany: if he can control the bed this way, why not everything else in Robbie’s bedroom universe?
Check out the story of tech + kindness + grit:
Old Man Nack would’ve killed for this back in his designer days:
As Design Taxi writes,
“Material Theming” effectively fixes a core gripe with the original “Material Design”: that virtually every Android app looks the “same,” or like it was made by Google, which isn’t ideal for brands.
The tool is currently available on Sketch, and you can use it by downloading the “Material” plugin in the app. Google aims to expand the system regularly, and will roll out new options such as animations, depth controls, and textures next.
Help me, Bavarian Motor Works, you’re my only hope…
HoloActive Touch appears to float in the air, and it also provides actual, felt tactile feedback in response to interactions.
As for the tech used to make the interface feel somewhat physical, even though you’re just poking around in mid-air, we’ve heard it might be sourced from Ultrahaptics, a company whose whole mission is to make it possible to feel things including “invisible buttons and dials” when you want them to be tangible, and then not when you don’t.
Now they’re back, showing a slicker but shallower (?) version of the same idea:
Well, we’ll see. Hopefully there’s a lot more to the Adobe tech. Meanwhile, I’m reminded of various VR photo-related demos. After donning a mask & shuffling around a room waving wands in the air like a goof, you realize, “Oh… so I just did the equivalent of zooming in & showing the caption?!”
Who f’ing cares?
You know what would be actually worth a damn? Let me say, “Okay, take all my shots where Henry is making the ‘Henry Face,’ then make an animated face collage made up of those faces—and while you’re at it, P-shop him into a bunch of funny scenes.” Don’t give me a novel but cumbersome rehash, gimme some GD superpowers already.
But hey, they’re making a new Blade Runner, so maybe now Ryan Gosling will edit his pics by voice, and they’ll bring back talking cameras, and in the words of Stephen Colbert, “It’s funny because nothing matters.”
Seriously (unless, of course, the UI demo is just some elaborate trolling). I can’t wait for social media to let you apply a “Facepalm” reaction by literally jamming your phone/palm against your face. Check out the demo & read on for details:
(Of course, in the current political climate I can’t help but think, “Great, I’m glad this is the critically important shit we spend our biggest brains on.”)
Here’s a rather nifty use of a phone’s back-facing camera to enable gesture-based control in a Google Cardboard-style VR rig:
My friend Andy notes, “It’s fun to think of a future of people waving their fingers around in public with no externally visible context… WE’LL ALL BE WIZARDS!”
TechCrunch offers a handful of additional details.
Once you’re gone you can never come back… — Neil Young
Luke Wroblewski (designer, writer, & coincidentally my boss) shares a bunch of interesting details on how best to ask users for permission to access their location, etc. (e.g. “the double dialog,” a decoy prompt that gauges whether you’ll say no; if so, the app can try again later).
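For a web app, that pattern is easy to sketch: put up your own lightweight prompt first, and only trigger the browser’s real (and effectively one-shot) permission dialog if the user seems willing. Here’s a minimal TypeScript sketch of the idea using the standard Geolocation API; the copy and helper names are mine, not Luke’s:

```typescript
// "Double dialog": ask with our own UI first, so a "no" doesn't burn the
// browser's real permission prompt. window.confirm() stands in for a nicer
// in-page dialog to keep the sketch self-contained.
function showPrePrompt(message: string): Promise<boolean> {
  return Promise.resolve(window.confirm(message));
}

async function requestLocationPolitely(): Promise<GeolocationPosition | null> {
  const okToAsk = await showPrePrompt(
    "We can show nearby results if you share your location. Ask the browser now?"
  );
  if (!okToAsk) {
    // Declining the decoy costs nothing; the real permission is still
    // unspent, so the app can ask again later.
    return null;
  }
  return new Promise((resolve) => {
    navigator.geolocation.getCurrentPosition(
      (position) => resolve(position),
      () => resolve(null) // the real dialog was denied, or location failed
    );
  });
}
```

The whole point is that a “no” to the in-page decoy costs the app nothing, whereas a “no” to the native dialog can keep it from ever asking again.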
Territory Studio nailed a tricky middle ground (futuristic but not fanciful) in crafting some great-looking interfaces for The Martian. Take a look:
Working closely with NASA, Territory developed a series of deft and elegant concepts that combine factual integrity and filmic narrative, yet are forward-looking and push NASA’s current UI conventions as much as possible.
Territory’s plot-based graphics include identities and visual languages for each set: images, text, code, engineering schematics, and 3D visualisations based on authentic satellite images showing Martian terrain, weather, and mission equipment, served across consoles, navigation and communication systems, laptops, mobiles, tablets, and arm screens throughout.
In all, Territory delivered around 400 screens for on-set playback, most of them featuring interactive elements. With 85 screens on the NASA Mission Control set alone, a number of which were 6m x 18m wall screens, there are many moments in which the graphics become a dynamic bridge between Earth and Mars, narrative and action, audience and characters.
I’m loving Peter Quinn’s bouncing, tongue-in-cheek interface elements:
Super fun, pre-animated, sometimes looping, customizable Fake User Interface assets, as editable After Effects comps. Just drag and drop to quickly create and customize FUI layouts to suit your projects.
[Vimeo] [Via Justin Maxwell]
What do you think of this thing?
It could be cool, but I find myself getting old & jaded. The Leap Motion sensor has yet to take off, and I’m reminded of Logitech’s NuLOOQ Navigator. It was announced some 9 years ago, drove Adobe tools in similar ways, and failed to find traction in the market (though it’s evidently been superseded by the SpacePilot Pro).
But hey, who knows?
Having an excessive interest in keyboard shortcuts (I once wrote an edition of a book dedicated to this subject), I’m delighted to see some welcome tweaks arriving in Photoshop CC. According to Julieanne Kost’s blog:
- Cmd-comma hides/shows the currently selected layer(s)
- Cmd-opt-comma shows all layers
- Cmd-slash locks/unlocks the currently selected layer(s)
- Cmd-opt-slash unlocks all layers
(On Windows, substitute Ctrl for Cmd and Alt for Opt.) [Via Jeff Tranberry]
If “Double knuckle knock” becomes more than, I dunno, presumably some gross phrase you’d find on Urban Dictionary, you may thank the folks at Qeexo:
FingerSense is an enhancement to touch interaction that allows conventional screens to know how the finger is being used for input: fingertip, knuckle or nail. Further, our system can add support for a passive stylus with an eraser. The technology is lightweight, low-latency, and cost-effective.
- Tethr bills itself as “The last UI kit you’ll ever need” and “The Most Beautiful iOS Design Kit Ever Made.” I’ll leave that judgement to you, but at a glance it looks like some nicely assembled PSD templates.
- You don’t actually need Photoshop to leverage these templates, either: Adobe’s Web-based Project Parfait can extract content “as 8-bit PNG, 32-bit PNG, JPG, and SVG images.”
Hmm—I’m not sold (at all) on the discoverability of this thing, but I remain deeply eager to see someone break open the staid, hoary world of in-car electronics. (The hyped Sync system in our new Fusion is capable but byzantine & laggy. What’s waiting a second+ after button pushes between friends—besides roughly 100 feet traveled at speed?) What do you think?
[YouTube] [Via Christian Cantrell]