Here’s a rather nifty use of a phone’s back-facing camera to enable gesture-based control in a Google Cardboard-style VR rig:
My friend Andy notes, “It’s fun to think of a future of people waving their fingers around in public with no externally visible context… WE’LL ALL BE WIZARDS!”
Once you’re gone you can never come back… — Neil Young
Luke Wroblewski (designer, writer, & coincidentally my boss) shares a bunch of interesting details on how best to ask users for their permission to access location, etc. (e.g. “the double dialog,” a decoy prompt that gauges whether you’d say no; if so, the app waits and asks again later)
Territory Studio nailed a tricky middle ground (futuristic but not fanciful) in crafting some great-looking interfaces for The Martian. Take a look:
Working closely with NASA, Territory developed a series of deft and elegant concepts that combine factual integrity and filmic narrative, yet are forward looking and pushing NASA’s current UI conventions as much as possible.
Territory’s plot-based graphics include identities and visual languages for each set, encompassing images, text, code, engineering schematics, and 3D visualisations based on authentic satellite images of Martian terrain, weather, and mission equipment, served across consoles, navigation and communication systems, laptops, mobiles, tablets, and arm screens throughout.
In all, Territory delivered around 400 screens for on-set playback, most of them featuring interactive elements. With 85 screens on the NASA Mission Control set alone (a number of them 6m x 18m wall screens), there are many moments in which the graphics become a dynamic bridge between Earth and Mars, narrative and action, audience and characters.
Super fun, pre-animated, sometimes looping, customizable Fake User Interface assets, as editable After Effects comps. Just drag and drop to quickly create and customize FUI layouts to suit your projects.
It could be cool, but I find myself getting old & jaded. The Leap Motion sensor has yet to take off, and I’m reminded of Logitech’s NuLOOQ Navigator. It was announced some 9 years ago, drove Adobe tools in similar ways, and failed to find traction in the market (though it’s evidently been superseded by the SpacePilot Pro).
Having an excessive interest in keyboard shortcuts (I once wrote an edition of a book dedicated to this subject), I’m delighted to see some welcome tweaks arriving in Photoshop CC. According to Julieanne Kost’s blog:
Cmd-comma hides/shows the currently selected layer(s)
Cmd-opt-comma shows all layers
Cmd-slash locks/unlocks the currently selected layer(s)
Cmd-opt-slash unlocks all layers
(On Windows substitute Ctrl-Alt for Cmd-Opt) [Via Jeff Tranberry]
If “Double knuckle knock” becomes more than, I dunno, some gross phrase you’d presumably find on Urban Dictionary, you may thank the folks at Qeexo:
FingerSense is an enhancement to touch interaction that allows conventional screens to know how the finger is being used for input: fingertip, knuckle or nail. Further, our system can add support for a passive stylus with an eraser. The technology is lightweight, low-latency and cost effective.
Tethr bills itself as “The last UI kit you’ll ever need” and “The Most Beautiful iOS Design Kit Ever Made.” I’ll leave that judgement to you, but at a glance it looks like some nicely assembled PSD templates.
You don’t actually need Photoshop to leverage these templates, either: Adobe’s Web-based Project Parfait can extract content “as 8-bit PNG, 32-bit PNG, JPG, and SVG images.”
Hmm—I’m not sold (at all) on the discoverability of this thing, but I remain deeply eager to see someone break open the staid, hoary world of in-car electronics. (The hyped Sync system in our new Fusion is capable but byzantine & laggy. What’s waiting a second+ after button pushes between friends—besides roughly 100 feet traveled at speed?) What do you think?
Transylvanian non(?)-vampire Sorin Neica has created the “Keyboard-S,” an enormous (yet thin) keyboard designed to drive Photoshop & potentially other apps. It’s sort of a Configurator panel that’s sprung right off your screen:
I have a hard time imagining it taking off, and funding on Kickstarter is pretty anemic to date, but I found the idea interesting enough to share. [Via Gary Greenwald]
On the 7th anniversary of the iPhone’s introduction, it’s interesting to look back at the ground it broke, the origins of some of its innovations, and more: http://vimeo.com/81745843
[YouTube]
According to the team, the new Stand In will let you:
Share your prototypes with teammates and clients. Let them experience your designs on their devices instead of scrolling through PDFs on their computers.
Design and use your prototype in real time. As you make changes in Photoshop, Stand In sends the changes to the fully functional prototype.
Move past boring static screens. Add buttons with press states, content that scrolls, modals, and more!
Bring your prototype to life with screen transitions and animation. Stop telling people how the app is supposed to work. Start showing them.
The tool costs $25/mo. & requires a Mac running Photoshop CC.
“Much more than image extraction,” writes Photoshop’s Tim Riot, “Stand In takes positioning, styling, state, even motion data, from PSDs and creates prototypes that feel like real apps which you can view on your iPhone. This capability, to fluidly create in Photoshop and seamlessly output designs to any context, is at the heart of the Generator technology.”
People loved the photo backup/sharing startup Everpix, but it keeled over after netting just ~6,000 paying customers. (That’s hardly surprising in a world where backup & sharing come free with every phone.) It helped popularize a neat feature called Flashback, which showed photos from your archive taken exactly one year ago.
Now I’ve found Timehop, a free iOS app that finds the images you shared across various social networks, then gives you snapshots from one, two, and more years ago. The daily push notification it sends provides a little treat I’ve come to anticipate.
What sets the app apart, though, is the delight its creators take in otherwise-mundane UI details. The spinning loading indicator is a Back To The Future-style flux capacitor:
(In the app itself it animates.) They’ve also enjoyed making their mascot Abe paw at the pull-to-refresh indicator, seen here captured by Beautiful Pixels:
Well played, guys. Can’t wait to see what you cook up next.
“People don’t come to us because they want 1-inch drills,” the CEO of Black & Decker is said to have remarked. “They come to us because they want 1-inch holes.”
The beautifully executed app Tastemade (App Store) represents an interesting evolution in creative software. Instead of offering an open-ended toolset for doing any number of projects, it aims to do just one thing well—namely, produce short, highly watchable person-on-the-street reviews of restaurants. The entire interface is built to walk you through making & sharing exactly one kind of content. Through constraint + automation, it tends to quickly produce a very nice “hole” (example).
The app is full of nice design touches. For example:
Based on its knowledge of your location & Foursquare data, the app can guess which restaurant you’re visiting, auto-populate the title field, then choose an appropriate font/music combo (which you can then change).
You’re prompted to capture a number of shots, and a colored progress indicator helps ensure you shoot enough but not too much.
When you go to choose a color look, your existing clips are played back at 2x speed, making it easier to see the impact of the filter on more footage.
One of the clips you shoot of the venue is placed behind the title & blurred.
Now, is this particular problem worth solving (i.e. do a lot of people want to record, share, and watch restaurant reviews)? I have no idea. (I’m not allowed out of the house; thanks, kids.) I think, however, that the radically reduced barriers to building & distributing software will keep reshaping the creative-tool landscape, producing more highly focused apps that nicely address one specific need.
inFORM is a Dynamic Shape Display that can render 3D content physically, so users can interact with digital information in a tangible way. inFORM can also interact with the physical world around it, for example moving objects on the table’s surface. Remote participants in a video conference can be displayed physically, allowing for a strong sense of presence and the ability to interact physically at a distance. inFORM is a step toward our vision of Radical Atoms.
Lean startup methodology strongly emphasizes paper prototypes: What’s the simplest, fastest, lowest-cost thing you could do to increase learning & decrease risk? To that end, AppSeed aims to let you sketch on paper, then turn the results into functioning, HTML-based app prototypes:
Interestingly, it ties into Photoshop:
Test your design on the phone and edit it in Photoshop through PS Connection. This creates a Photoshop document that has all your drawn elements on their own layers, giving you the pixel perfect control to move your design into the next stages of production.
The first fruits of independent developers extending Photoshop’s new Generator feature are starting to arrive.
“Design with Layer Comps… Link screens by naming layers… Open in Interactive Mode.” Sounds promising.
Composite is a brand new way of creating interactive prototypes. It automatically connects to your Photoshop® documents and converts your mockups into interactive prototypes in seconds. No need to export images or maintain tons of hotspots.
While designing in Photoshop® you can also get a live preview of your design directly on your device, ensuring the design works in the right scale and context.
What if Dyson made video games? As Engadget writes, “The idea is to give touchless experiences like motion control a form of physical interaction, offering the end user a more natural response through, well, touch.”
Touché is a funky interface project from Disney Research, turning everything from liquids (!) to door knobs into multitouch surfaces:
According to the project site, the technology “can not only detect a touch event, but simultaneously recognize complex configurations of the human hands and body during touch interaction.”
We added complex touch and gesture sensitivity not only to computing devices and everyday objects, but also to the human body and liquids. Importantly, instrumenting objects and material with touch sensitivity is easy and straightforward: a single wire is sufficient to make objects and environments touch and gesture sensitive.
WiSee is the first wireless system that can identify gestures in line-of-sight, non-line-of-sight, and through-the-wall scenarios. Unlike other gesture recognition systems like Kinect, Leap Motion or MYO, WiSee requires neither an infrastructure of cameras nor user instrumentation of devices. We implement a proof-of-concept prototype of WiSee and evaluate it in both an office environment and a two-bedroom apartment. Our results show that WiSee can identify and classify a set of nine gestures with an average accuracy of 94%.
C’mon, haven’t you always wanted to use rock fingers to control your stereo?
I’m kinda skeptical about the MYO armband achieving widespread adoption, but the video does suggest a series of fun mishaps (chicken-slicing gone wrong; army robot flailing; and “You have died of dysentery”-style messages you read while expiring after a ski crash). But hey, prove me wrong.
John Gruber once wrote, “In hindsight, I think the use cases for the original iPad are simplicity and delight.” Haze for iPhone nails that mission for weather:
“Is it going to be warmer tomorrow? Don’t read it. See it. The beautifully animated background shows you the trend. Use Haze frequently to unlock colorful themes and customize the look.”
The UI rewards exploration with lots of polished details, and the use of theme unlocking is an interesting way to encourage active use.
The one downside I’ve detected thus far is that the reliance on taps & gestures rather than on traditional buttons & labels leaves some functionality obscure. I feel dumb for not having discovered one of the most basic operations (tapping the central readout circle) on my own. (I hadn’t seen the video before downloading the app.) Even so, the app’s easy to navigate & a joy to use.
Oh, and if you like this sort of thing, check out Summly for news. It crashes too much & the summaries aren’t always great, but it’s lovely enough to explore that I stick with it.
Check out the multitouch music-making interface for Samplr:
I imagine myself trying to compose some Christmas music using this app, then having to quote Norm Macdonald: “Happy birthday, Jesus–hope you like crap!” [Via James Roche]
What do you think? It’s great-looking, but I remain a bit skeptical about using touchscreens (which obviously lack the physical variation of a keyboard or dedicated hardware controller) in this way. If you’re a Photoshop user with an iPad, are you using Adobe Nav–and if not, why not? I suspect the problem is that one has to keep glancing over at a touch screen, whereas one can navigate a keyboard (or physical jog wheel, etc.) simply by feel. Yet the concept remains alluring, so I’m curious about others’ assessment.
[Via James Cox]
“It’s the most fun you can have with lasers without a cat,” they say. Hmm—be that as it may, I have a hard time imagining people shelling out $179 and then using this thing comfortably. Still clever, though.
“Dear society: You got used to seeing people talk into space & learned to figure ‘Bluetooth, not schizophrenia.’ Now let’s see you get used to dead-eyed zombies fidgeting with the air to turn virtual dials as they walk. [Here’s more info.]” —Love, the tech industry
I kinda want to get one of these into—or rather, next to—Russell Brown’s hands.
KinÊtre is a research project from Microsoft Research Cambridge that allows novice users to scan physical objects and bring them to life in seconds by using their own bodies to animate them. This system has a multitude of potential uses for interactive storytelling, physical gaming, or more immersive communications.
“When we started this,” says creator Jiawen Chen, “we were thinking of using it as a more effective way of doing set dressing and prop placement in movies for a preview. Studios have large collections of shapes, and it’s pretty tedious to move them into place exactly. We wanted to be able to quickly walk around and grab things and twist them around. Then we realized we can do many more fun things.” I’ll bet.
Pretty darn cool, though if that Kinect dodgeball demo isn’t Centrifugal Bumble-Puppy come to life, I don’t know what is.
Here’s more info on using a Kinect as a 3D scanner:
You know this is coming. You know it’ll be almost impossible to resist.
“The more we use knowledge found on the Internet (and not in our own minds) the less capacity we have to actually hold that knowledge internally.” Seems about right. [Via]
“What if materials could defy gravity, so that we could leave them suspended in mid-air?” ask the creators of ZeroN. “ZeroN is a physical and digital interaction element that floats and moves in space by computer-controlled magnetic levitation.” One could ask questions about precision and practicality, but… holy crap, levitating balls as UI!
Hey, it’s the return of my (not at all) beloved Nintendo Power Glove!
Cynical take: “Oh, you were bitching that UIs requiring you to lift your hands & touch a screen would make you tired? Wait’ll you have to hold up an iPad in one hand just so you can re-create Lawnmower Man! You’ll be built like Jeff Fahey in no time, tuffy!”
Actual take: Cool!
In high school I had my first long-distance girlfriend. My dad would roll his eyes at our pre-Net attempts to connect. “Oh, you’re probably eating a cheese sandwich at 6pm, because Jeanne said she’d eat a cheese sandwich at 6pm…” He was kidding (and wrong), but there’s much to be said for synchronicity across space.
Enter Marco Triverio’s concept “Feel Me.” As Fast Company puts it,
When a friend is typing, you can see where they’re touching on your own screen. And when your fingers match up, from halfway across the world, haptic feedback can allow you to serendipitously touch. In a text-me-later culture, Feel Me enables communication that’s transient and visceral.
I think it’s rather brilliant. And as for Jeanne, sometimes I now see her across space, hobnobbing with Mitt Romney. Funny old world.
If this thing ($70?!) works even remotely as advertised, we’re in for an exciting future:
[Reader Pierre-Etienne Courtejoie quips, “I just shudder about the possible single-finger gestures to force quit software.” (Hmm, seems very John Gruber-positive.)]
Could people wrap their heads around the idea enough to use it productively? In my experience many people still struggle with things like symbols & Smart Objects–if they even use them at all. [Via Mausoom Sarkar]
Hats off to the guys at Teehan+Lax for serving the design/Photoshop community with this great app creation resource. “It’s based on iOS 5.1,” they write, “and includes hundreds of Retina assets available natively on the platform.”
Because Photoshop CS6 is such a big step forward for interface designers, the new file requires use of the CS6 beta:
This time around we executed the file in Adobe’s latest release, Photoshop CS6 (currently still in beta). It’s a free download right now and, in my humble opinion, one of the best releases of Photoshop to date. Its perfect pixel snapping, grouped layer styles, and a few other features enabled us to create the assets with more accuracy while remaining remarkably editable. We highly recommend it, not just so you can use this file, but so that you support great software releases like this.
Sometimes the best things are the smallest. I’m so weirdly proud of the layer searching shortcuts in PS CS6.
You can hit Cmd-Opt-Shift-F to put focus on the Layers panel’s new search field. Start typing and Photoshop will start displaying only the layers whose names match.
Hitting the same command highlights the text in the field, letting you start typing again to filter with a new string.
Hitting Delete clears the field, making Layers display all layers again.
Hitting Return/Enter will put keyboard focus back onto PS proper (consistent with how other text fields work in the app). Esc does the same but also cancels whatever change you just made.
Note that clearing the field isn’t the same as toggling filtering on/off with the little red switch to the right. Why? Because toggling the switch is non-destructive: You can set up filtering criteria (e.g. show me all text & adjustment layers), then quickly enable/disable filtering; you don’t have to keep setting up the parameters.
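To make that non-destructive distinction concrete, here’s a toy Python sketch of the idea, storing the filter criteria separately from an on/off switch. (The class and method names are hypothetical illustrations, not Photoshop’s actual scripting API.)

```python
class LayerFilter:
    """Toy model of criteria-plus-switch filtering: toggling filtering
    off does not discard the criteria (the 'red switch' behavior)."""

    def __init__(self):
        self.query = ""       # name substring to match
        self.enabled = False  # the on/off switch

    def set_query(self, text):
        # Typing in the search field sets the criteria and enables filtering.
        self.query = text
        self.enabled = bool(text)

    def toggle(self):
        # Non-destructive: the query survives the toggle.
        self.enabled = not self.enabled

    def visible(self, layer_names):
        # With filtering off (or no query), every layer is shown.
        if not self.enabled or not self.query:
            return list(layer_names)
        return [n for n in layer_names if self.query.lower() in n.lower()]


layers = ["Background", "Header text", "Header shape", "Footer text"]
f = LayerFilter()
f.set_query("header")
print(f.visible(layers))  # only the two "Header" layers
f.toggle()                # disable filtering; the query is kept
print(f.visible(layers))  # all four layers again
```

Clearing the field, by contrast, would reset `query` itself, so there’d be nothing left to re-enable.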
A big deal, used by tons & tons of people? Maybe not. But to me it speaks volumes about quality and craftsmanship, and God help me, I live for this stuff.
Here Grant Friedman of PSDTUTS quickly demos the basics: