Please tell me this is emitted by an R2 unit. The concept:
And here’s the current hardware in action:
For yet more check out the Verge’s up-close report.
[YouTube]
What do you think of this thing?
It could be cool, but I find myself getting old & jaded. The Leap Motion sensor has yet to take off, and I’m reminded of Logitech’s NuLOOQ Navigator. It was announced some 9 years ago, drove Adobe tools in similar ways, and failed to find traction in the market (though it’s evidently been superseded by the SpacePilot Pro).
But hey, who knows?
Having an excessive interest in keyboard shortcuts (I once wrote an edition of a book dedicated to this subject), I’m delighted to see some welcome tweaks arriving in Photoshop CC. According to Julieanne Kost’s blog:
(On Windows substitute Ctrl-Alt for Cmd-Opt) [Via Jeff Tranberry]
If “Double knuckle knock” becomes more than, I dunno, some gross phrase you’d presumably find on Urban Dictionary, you may thank the folks at Qeexo:
FingerSense is an enhancement to touch interaction that allows conventional screens to know how the finger is being used for input: fingertip, knuckle or nail. Further, our system can add support for a passive stylus with an eraser. The technology is lightweight, low-latency and cost effective.
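For the curious, here’s a toy sketch of how an app might branch on that kind of classified touch. (The enum and handler below are entirely my own invention for illustration — nothing here reflects Qeexo’s actual SDK or API.)

```python
from enum import Enum, auto

class TouchType(Enum):
    """The input classes FingerSense is described as distinguishing."""
    FINGERTIP = auto()
    KNUCKLE = auto()
    NAIL = auto()
    STYLUS = auto()
    ERASER = auto()

def handle_touch(touch_type: TouchType, x: float, y: float) -> str:
    """Dispatch a classified touch to a tool action.

    The mapping is illustrative: fingertip paints, knuckle invokes a
    context menu (the 'knuckle knock'), nail selects, eraser erases.
    """
    actions = {
        TouchType.FINGERTIP: "paint",
        TouchType.KNUCKLE: "context_menu",
        TouchType.NAIL: "select",
        TouchType.STYLUS: "draw",
        TouchType.ERASER: "erase",
    }
    return f"{actions[touch_type]} at ({x}, {y})"
```

The interesting design point is that the *same screen location* can mean different things depending on which part of the finger lands on it — no extra buttons or modes required.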
I really love the part where it helps 780 million people find the clean water they need. (Er, wait…)
Hmm—I’m not sold (at all) on the discoverability of this thing, but I remain deeply eager to see someone break open the staid, hoary world of in-car electronics. (The hyped Sync system in our new Fusion is capable but byzantine & laggy. What’s waiting a second+ after button pushes between friends—besides roughly 100 feet traveled at speed?) What do you think?
[YouTube] [Via Christian Cantrell]
Transylvanian non(?)-vampire Sorin Neica has created the “Keyboard-S,” an enormous (yet thin) keyboard designed to drive Photoshop & potentially other apps. It’s sort of a Configurator panel that’s sprung right off your screen:
I have a hard time imagining it taking off, and funding on Kickstarter is pretty anemic to date, but I found the idea interesting enough to share. [Via Gary Greenwald]
On the 7th anniversary of the iPhone’s introduction, it’s interesting to look back at the ground it broke, the origins of some of its innovations, and more:
http://vimeo.com/81745843
[YouTube]
According to the team, the new Stand In will let you:
- Share your prototypes with teammates and clients. Let them experience your designs on their devices instead of scrolling through PDFs on their computers.
- Design and use your prototype in real time. As you make changes in Photoshop, Stand In sends the changes to the fully functional prototype.
- Move past boring static screens. Add buttons with press states, content that scrolls, modals, and more!
- Bring your prototype to life with screen transitions and animation. Stop telling people how the app is supposed to work. Start showing them.
The tool costs $25/mo. & requires a Mac running Photoshop CC.
“Much more than image extraction,” writes Photoshop’s Tim Riot, “Stand In takes positioning, styling, state, even motion data, from PSDs and creates prototypes that feel like real apps which you can view on your iPhone. This capability, to fluidly create in Photoshop and seamlessly output designs to any context, is at the heart of the Generator technology.”
[Vimeo]
Here’s a great idea, featuring great UX details.
People loved the photo backup/sharing startup Everpix, but it keeled over after netting just ~6,000 paying customers. (That’s hardly surprising in a world where backup & sharing come free with every phone.) It helped popularize a neat feature called Flashback, which showed photos from your archive taken exactly one year ago.
Now I’ve found Timehop, a free iOS app that finds the images you shared across various social networks, then gives you snapshots from one, two, and more years ago. The daily push notification it sends provides a little treat I’ve come to anticipate.
What sets the app apart, though, is the delight its creators take in otherwise-mundane UI details. The spinning loading indicator is a Back To The Future-style flux capacitor:

(In the app itself it animates.) They’ve also enjoyed making their mascot Abe paw at the pull-to-refresh indicator, seen here captured by Beautiful Pixels:
Well played, guys. Can’t wait to see what you cook up next.
[YouTube]
“People don’t come to us because they want 1-inch drills,” the CEO of Black & Decker is said to have remarked. “They come to us because they want 1-inch holes.”
The beautifully executed app Tastemade (App Store) represents an interesting evolution in creative software. Instead of offering an open-ended toolset for doing any number of projects, it aims to do just one thing well—namely, produce short, highly watchable person-on-the-street reviews of restaurants. The entire interface is built to walk you through making & sharing exactly one kind of content. Through constraint + automation, it tends to quickly produce a very nice “hole” (example).
The app is full of nice design touches. For example:

Now, is this particular problem worth solving (i.e. do a lot of people want to record, share, and watch restaurant reviews)? I have no idea. (I’m not allowed out of the house; thanks, kids.) I think, however, that the radically reduced barriers to building & distributing software will keep reshaping the creative-tool landscape, producing more highly focused apps that nicely address one specific need.
I can’t wait for this to be featured in a super-unsexy remake of Ghost:
MIT is making pure magic:
Check out some below-the-table stills on Colossal.
inFORM is a Dynamic Shape Display that can render 3D content physically, so users can interact with digital information in a tangible way. inFORM can also interact with the physical world around it, for example moving objects on the table’s surface. Remote participants in a video conference can be displayed physically, allowing for a strong sense of presence and the ability to interact physically at a distance. inFORM is a step toward our vision of Radical Atoms.
[Vimeo]
Lean startup methodology strongly emphasizes paper prototypes: What’s the simplest, fastest, lowest-cost thing you could do to increase learning & decrease risk? To that end, AppSeed aims to let you sketch on paper, then turn the results into functioning, HTML-based app prototypes:
Interestingly, it ties into Photoshop:
Test your design on the phone and edit it in Photoshop through PS Connection. This creates a Photoshop document that has all your drawn elements on their own layers, giving you the pixel perfect control to move your design into the next stages of production.
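To make the idea concrete, here’s a hedged sketch of the sketch-to-prototype step: recognized elements (a type plus a bounding box) become absolutely positioned HTML. Everything here — the element format, the tag mapping — is my own guess at the shape of the problem, not AppSeed’s actual pipeline.

```python
# A guess at the kind of data a sketch-recognition step might emit:
# each detected element has a type, a label, and a bounding box.
elements = [
    {"type": "button", "label": "Login", "box": (20, 300, 280, 44)},
    {"type": "input",  "label": "Email", "box": (20, 100, 280, 36)},
]

def to_html(elements):
    """Emit a minimal HTML prototype, absolutely positioned to match
    the paper sketch. (Tag-per-element-type mapping is illustrative.)"""
    tags = {
        "button": "<button style='{s}'>{label}</button>",
        "input": "<input style='{s}' placeholder='{label}'>",
    }
    out = []
    for el in elements:
        x, y, w, h = el["box"]
        style = f"position:absolute;left:{x}px;top:{y}px;width:{w}px;height:{h}px"
        out.append(tags[el["type"]].format(s=style, label=el["label"]))
    return "\n".join(out)
```

The same bounding boxes could just as easily be written out as one-layer-per-element in a PSD, which is presumably what the Photoshop handoff described above amounts to.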

[Via]
The first fruits of independent developers extending Photoshop’s new Generator feature are starting to arrive.
“Design with Layer Comps… Link screens by naming layers… Open in Interactive Mode.” Sounds promising.
Composite is a brand new way of creating interactive prototypes. It automatically connects to your Photoshop® documents and converts your mockups into interactive prototypes in seconds. No need to export images or maintain tons of hotspots.
While designing in Photoshop® you can also get a live preview of your design directly on your device, ensuring the design works in the right scale and context.
[Via Tim Riot]
3Gear Systems is exploring ways to control Photoshop via their gesture-sensing technology:
Here’s a more general demo of their tech:
Touché is a funky interface project from Disney Research, turning everything from liquids (!) to door knobs into multitouch surfaces:
According to the project site, the technology “can not only detect a touch event, but simultaneously recognize complex configurations of the human hands and body during touch interaction.”
We added complex touch and gesture sensitivity not only to computing devices and everyday objects, but also to the human body and liquids. Importantly, instrumenting objects and material with touch sensitivity is easy and straightforward: a single wire is sufficient to make objects and environments touch and gesture sensitive.
What the what?
WiSee is the first wireless system that can identify gestures in line-of-sight, non-line-of-sight, and through-the-wall scenarios. Unlike other gesture recognition systems like Kinect, Leap Motion or MYO, WiSee requires neither an infrastructure of cameras nor user instrumentation of devices. We implement a proof-of-concept prototype of WiSee and evaluate it in both an office environment and a two-bedroom apartment. Our results show that WiSee can identify and classify a set of nine gestures with an average accuracy of 94%.
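The classification side of this is easier to picture with a toy example. Here’s a nearest-template classifier over small feature vectors — purely illustrative, and emphatically *not* WiSee’s actual algorithm, which pattern-matches Doppler shift profiles in wireless signals:

```python
import math

# Toy stand-in: each gesture is represented by a template feature
# vector (in the real system, features derive from Doppler shifts
# the moving body induces in ambient wireless signals).
TEMPLATES = {
    "push":  [1.0, 0.2, -0.5],
    "pull":  [-1.0, -0.2, 0.5],
    "punch": [2.0, 0.1, 0.0],
}

def classify(features):
    """Return the gesture whose template is closest (Euclidean
    distance) to the observed feature vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda name: dist(TEMPLATES[name], features))
```

With nine gesture templates instead of three, a scheme like this is at least in the spirit of the 94%-accuracy result the team reports.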
[Via Bill Roberts]
Interesting. What do you think?
[Via Mark Maguire]
Hmm—I’d imagine this MIT project being less bizarre when applied to a wearable augmented reality interface like Google Glass:
[Via]
Check out this nifty project from Anastasiy Safari (known for his popular color-picking Photoshop extensions). It combines PS with a Leap Motion controller:
C’mon, haven’t you always wanted to use rock fingers to control your stereo?
I’m kinda skeptical that the MYO armband will catch on widely, but the video does suggest a series of fun mishaps (chicken-slicing gone wrong; army robot flailing; and “You have died of dysentery”-style messages you read while expiring after a ski crash). But hey, prove me wrong.
John Gruber once wrote, “In hindsight, I think the use cases for the original iPad are simplicity and delight.” Haze for iPhone nails that mission for weather:
“Is it going to be warmer tomorrow? Don’t read it. See it. The beautifully animated background shows you the trend. Use Haze frequently to unlock colorful themes and customize the look.”
The UI rewards exploration with lots of polished details, and the use of theme unlocking is an interesting way to encourage active use.
The one downside I’ve detected thus far is that the reliance on taps & gestures rather than on traditional buttons & labels leaves some functionality obscure. I feel dumb for not having discovered one of the most basic operations (tapping the central readout circle) on my own. (I hadn’t seen the video before downloading the app.) Even so, the app’s easy to navigate & a joy to use.
Oh, and if you like this sort of thing, check out Summly for news. It crashes too much & the summaries aren’t always great, but it’s lovely enough to explore that I stick with it.
What do you think of PixelTone, an experimental interface from Adobe Research & the University of Michigan?
Check out the multitouch music-making interface for Samplr:
I imagine myself trying to compose some Christmas music using this app, then having to quote Norm MacDonald: “Happy birthday, Jesus–hope you like crap!” [Via James Roche]
Check out Jeff Chow’s (now funded) Kickstarter project:
What do you think? It’s great-looking, but I remain a bit skeptical about using touchscreens (which obviously lack the physical variation of a keyboard or dedicated hardware controller) in this way. If you’re a Photoshop user with an iPad, are you using Adobe Nav–and if not, why not? I suspect the problem is that one has to keep glancing over at a touch screen, whereas one can navigate a keyboard (or physical jog wheel, etc.) simply by feel. Yet the concept remains alluring, so I’m curious about others’ assessment.
[Via James Cox]
“It’s the most fun you can have with lasers without a cat,” they say. Hmm—be that as it may, I have a hard time imagining people shelling out $179 and then using this thing comfortably. Still clever, though.
[Via Guy Nicholas]
“Dear society: You got used to seeing people talk into space & learned to figure ‘Bluetooth, not schizophrenia.’ Now let’s see you get used to dead-eyed zombies fidgeting with the air to turn virtual dials as they walk. [Here’s more info.]” —Love, the tech industry
I kinda want to get one of these into—or rather, next to—Russell Brown’s hands.
Fascinating!
KinÊtre is a research project from Microsoft Research Cambridge that allows novice users to scan physical objects and bring them to life in seconds by using their own bodies to animate them. This system has a multitude of potential uses for interactive storytelling, physical gaming, or more immersive communications.
“When we started this,” says creator Jiawen Chen, “we were thinking of using it as a more effective way of doing set dressing and prop placement in movies for a preview. Studios have large collections of shapes, and it’s pretty tedious to move them into place exactly. We wanted to be able to quickly walk around and grab things and twist them around. Then we realized we can do many more fun things.” I’ll bet.
Pretty darn cool, though if that Kinect dodgeball demo isn’t Centrifugal Bumble-Puppy come to life, I don’t know what is.
Here’s more info on using a Kinect as a 3D scanner:
[Via]
You know this is coming. You know it’ll be almost impossible to resist.
“The more we use knowledge found on the Internet (and not in our own minds) the less capacity we have to actually hold that knowledge internally.” Seems about right. [Via]
Can. Not. Wait.
Check out more info from the MIT Technology Review, and from the product site.
[Via Mausoom Sarkar]
I can’t wait to see what Adobe tools can do with multitouch plus a super high precision stylus that pays attention to pressure, tilt, and rotation.
This bad boy costs $3699 and will be available in early August.
“What if materials could defy gravity, so that we could leave them suspended in mid-air?” ask the creators of ZeroN. “ZeroN is a physical and digital interaction element that floats and moves in space by computer-controlled magnetic levitation.” One could ask questions about precision and practicality, but… holy crap, levitating balls as UI!
[Via]
Hey, it’s the return of my (not at all) beloved Nintendo Power Glove!
Cynical take: “Oh, you were bitching that UIs requiring you to lift your hands & touch a screen would make you tired? Wait’ll you have to hold up an iPad in one hand just so you can re-create Lawnmower Man! You’ll be built like Jeff Fahey in no time, tuffy!”
Actual take: Cool!
Check out the project site for more info.
[Via Dave Simons]
In high school I had my first long-distance girlfriend. My dad would roll his eyes at our pre-Net attempts to connect. “Oh, you’re probably eating a cheese sandwich at 6pm, because Jeanne said she’d eat a cheese sandwich at 6pm…” He was kidding (and wrong), but there’s much to be said for synchronicity across space.
Enter Marco Triverio’s concept “Feel Me.” As Fast Company puts it,
When a friend is typing, you can see where they’re touching on your own screen. And when your fingers match up, from halfway across the world, haptic feedback can allow you to serendipitously touch. In a text-me-later culture, Feel Me enables communication that’s transient and visceral.
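The core of the idea fits in a few lines: show the remote touch, and fire haptics when the two fingers line up. A minimal sketch, assuming normalized screen coordinates — the function names and the distance threshold are my own inventions, not anything from the concept video:

```python
import math

def fingers_meet(local, remote, threshold=0.05):
    """True when a local and a remote touch point (as (x, y) tuples in
    normalized 0..1 screen coordinates) are close enough to count as
    'touching'. The threshold is an arbitrary guess."""
    return math.dist(local, remote) <= threshold

def on_touch(local, remote):
    # In a real app the "buzz" branch would trigger haptic feedback;
    # here it just returns a label so the logic is testable.
    return "buzz" if fingers_meet(local, remote) else "show_remote_dot"
```

All the hard parts — low-latency sync across the network, and haptics subtle enough to feel like contact rather than a notification — live outside this sketch, of course.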
I think it’s rather brilliant. And as for Jeanne, sometimes I now see her across space, hobnobbing with Mitt Romney. Funny old world.
If this thing ($70?!) works even remotely as advertised, we’re in for an exciting future:
[Reader Pierre-Etienne Courtejoie quips, “I just shudder about the possible single-finger gestures to force quit software.” (Hmm, seems very John Gruber-positive.)]
What do you think of this cleverness?
Could people wrap their heads around the idea enough to use it productively? In my experience many people still struggle with things like symbols & Smart Objects–if they even use them at all. [Via Mausoom Sarkar]
Just think of the horrors Andrew Lloyd Webber could wreak with this thing (ideally with a Beowulf cluster of them).
[Via]
Hats off to the guys at Teehan+Lax for serving the design/Photoshop community with this great app creation resource. “It’s based on iOS 5.1,” they write, “and includes hundreds of Retina assets available natively on the platform.”
Because Photoshop CS6 is such a big step forward for interface designers, the new file requires use of the CS6 beta:
This time around we executed the file in Adobe’s latest release, Photoshop CS6 (currently still in beta). It’s a free download right now and, in my humble opinion, one of the best releases of Photoshop to date. Its perfect pixel snapping, grouped layer styles and a few other features enabled us to create the assets with more accuracy, yet remain remarkably editable. We highly recommend it, not just so you can use this file, but so that you support great software releases like this.
Check out the iPad GUI PSD (Retina Display) at Teehan+Lax.
Sometimes the best things are the smallest. I’m so weirdly proud of the layer searching shortcuts in PS CS6.
Note that clearing the field isn’t the same as toggling filtering on/off with the little red switch to the right. Why? Because toggling the switch is non-destructive: You can set up filtering criteria (e.g. show me all text & adjustment layers), then quickly enable/disable filtering; you don’t have to keep setting up the parameters.
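The design pattern behind that non-destructive toggle is simple and worth naming: keep the criteria and the on/off switch as separate pieces of state. A little illustrative sketch (my own code, obviously nothing from Photoshop’s internals):

```python
from dataclasses import dataclass, field

@dataclass
class LayerFilter:
    """Filter criteria live independently of the enabled switch, so
    toggling never discards them -- mirroring the CS6 behavior
    described above."""
    kinds: set = field(default_factory=set)   # e.g. {"text", "adjustment"}
    enabled: bool = False

    def toggle(self):
        self.enabled = not self.enabled       # criteria survive the toggle

    def apply(self, layers):
        if not self.enabled or not self.kinds:
            return layers
        return [l for l in layers if l["kind"] in self.kinds]
```

Clearing the search field, by contrast, would empty `kinds` itself — which is exactly why the red switch and the clear button aren’t the same operation.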
A big deal, used by tons & tons of people? Maybe not. But to me it speaks volumes about quality and craftsmanship, and God help me, I live for this stuff.
Here Grant Friedman of PSDTUTS quickly demos the basics:
My question: if a Wookiee pushes you, do the stars turn into a bunch of lines?
[Via]
Ehhh, what? But yes, it’s apparently real. Read more here.
[Via]
Of potential interest to Web/screen designers:
Interface designer Neven Mrgan made a good point on Friday:
Touch gestures are the new keyboard shortcuts, but the difference is delight: no one ever saw ⌘⇧S and thought “awesome!”
To which I say: Well, no normal person, maybe. 🙂
I remember learning Photoshop and discovering that holding Option would turn Cancel buttons into Reset buttons. “They didn’t have to do that,” I thought–delighted. Later when I learned After Effects, the teacher showed that Shift-dragging did one thing while Option-dragging did another. I asked how one would do both things at once, and though he didn’t know, when I combined the modifiers, sure enough, it worked as I wanted. “My people, my people…” I thought. And just the other day, I took enormous pride in persuading the Photoshop team to get the semantics of a new shortcut just exactly right.
It’s craftsmanship that counts*, and delight flows from the feeling of speed, power, and control. Whatever the surface, let my fingers–and my brain–fly.
J.
* As Photoshop godfather Mark Hamburg observed, “People pay for features because it’s easier to justify the expense. People adore polish because it makes the product feel good, and that adoration will carry you farther in the long run than features.”
Jeff Han & Perceptive Pixel blew everyone’s minds with their multitouch demo a year before the iPhone debuted. Six (!) years later he demonstrates their 82″ (!!) multitouch display featuring pen input:
Here’s hoping we see more pen-enabled goodness & lay that “If you see a stylus, it means they blew it” dogma to rest. [Via]
I want Robert Shaw from Jaws to describe my morning as he would a shark attack: “Up comes a reminder on the iPad and the Netflix stops streamin’, and then… ah then you hear that terrible high-pitched screamin’…” Yeah, it got ugly. (Sorry, other conference call participants.)
Good news, though: You can now go into Settings->Notifications, find the Calendar app, and set the notification type from Alert (which interrupts the video) to Banner. Now our guys can watch their morning Mighty Machines without going ballistic when it pauses.
On the downside, here’s an intriguing little bit of usability research: Finn is often generating four-finger “swipes” (new in iOS 5 for switching apps) when simply trying to drag on the screen. While coloring in lines in the aforementioned Harold, he’d push hard and his little knuckles would register as multitouch swipes. Thus he’d start switching apps, bringing up the list of apps, etc. Who knew?
As always, I pine for Apple to introduce multi-user support in iOS. Then, in the kids’ profile, I could add “disable global swipe gestures” to “make it harder to exit the app via the Home button,” “disallow scary stuff on YouTube,” etc.
Update: Double who-knew: BubCap home button covers “are just rigid enough to keep toddlers from pressing the home button, yet flexible enough that adults can activate the button with a firm push.” [Via Iván Cavero Belaunde]