I don’t need this on my desk. I don’t need this on my desk. I don’t need this on my desk.
I kinda need this on my desk. 😌
Hexbot is an all-in-one desktop robotic arm that lets users bring their ideas to life easily through drawing, writing, laser engraving, and 3D printing! Up to 0.05 mm precision, noiseless design, easy-to-use software, and more than 6 functional modules.
“Oh, that’s easy,” said my wife on our first date, answering my question about what kind of car she’d be: “I’d be one of those little three-wheeled French jobs like Audrey Hepburn drove in Funny Face.” Ever since then we’ve had a thing for three-wheelers, putting one on our save-the-date wedding card.
It’s hard to imagine the electric Nobe car really hitting the highways, but I love the look of it:
Congrats to Paul Debevec, Xueming Yu, Wan-Chun Alex Ma, and their former colleague Timothy Hawkins for the recognition of their groundbreaking Light Stage work!
Now they’re working with my extended team:
“We try to bring our knowledge and background to try to make better Google products,” Ma says. “We’re working on improving the realism of VR and AR experiences.”
I go full SNL Sue thinking about what might be possible.
Oh, and they worked on Ready Player One (nominated for Best Visual Effects this year) and won for Blade Runner 2049 last year:
Just prior to heading to Google, they worked on “Blade Runner 2049,” which took home the Oscar for Best Visual Effects last year and brought back the character Rachael from the original “Blade Runner” movie. The new Rachael was constructed with facial features from the original actress, Sean Young, and another actress, Loren Peta, to make the character appear to be the same age she was in the first film.
We’ve been working on Project Stream, a technical test to solve some of the biggest challenges of streaming. For this test, we’re going to push the limits with one of the most demanding applications for streaming—a blockbuster video game.
We’ve partnered with one of the most innovative and successful video game publishers, Ubisoft, to stream their soon-to-be released Assassin’s Creed Odyssey® to your Chrome browser on a laptop or desktop. Starting on October 5, a limited number of participants will get to play the latest in this best-selling franchise at no charge for the duration of the Project Stream test.
The key component to Oppo’s system is a periscope setup inside the phone: light comes in through one lens, gets reflected by a mirror into an array of additional lenses, and then arrives at the image sensor, which sits perpendicular to the body of the phone. That’s responsible for the telephoto lens in Oppo’s array, which has a 35mm equivalence of 160mm. Between that lens, a regular wide-angle lens, and a superwide-angle that’s 16mm-equivalent, you get the full 10x range that Oppo promises.
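The 10x figure follows directly from the 35mm-equivalent focal lengths quoted above; as a quick sanity check (using only the numbers Oppo cites):

```python
# Oppo's quoted 35mm-equivalent focal lengths
tele_mm = 160   # periscope telephoto
wide_mm = 16    # superwide-angle

zoom_range = tele_mm / wide_mm
print(f"{zoom_range:.0f}x zoom range")  # 10x zoom range
```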
Ladies & gentlemen, we are approaching Peak JNack…
Using 400,000 LEGO® bricks, two experienced LEGO® model makers have built what is probably the world’s biggest camper from LEGO® bricks. The full-size T2 was revealed at the f.re.e leisure and travel fair in Munich. Visitors young and old to f.re.e (20 – 24 February) will be able to admire the 700 kg Bulli up close. The vehicle that served as the blueprint for the model was the T2a camper van, built from 1967 to 1971 – to this day the truly iconic camper for globetrotters.
Bruce Berry (not Neil Young’s late roadie) created some beautiful time lapse imagery from images captured aboard the International Space Station:
On Vimeo he writes,
All footage has been edited, color graded, denoised, deflickered, and stabilized by myself. Some of the 4K video clips were shot at 24 frames/sec, reflecting the actual speed of the space station over the earth. Shots taken at wider angles were sped up a bit to match the flow of the video.
Some interesting facts about the ISS: The ISS maintains an orbit above the earth with an altitude of between 330 and 435 km (205 and 270 miles). The ISS completes 15.54 orbits per day around the earth and travels at a speed of 27,600 km/h (17,100 mph).
The yellow line that you see over the earth is Airglow/Nightglow. Airglow/Nightglow is a layer of nighttime light emissions caused by chemical reactions high in Earth’s atmosphere. A variety of reactions involving oxygen, sodium, ozone, and nitrogen result in the production of a very faint amount of light (Keck A and Miller S et al. 2013).
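Those orbital figures are self-consistent, by the way. Taking a mean Earth radius of about 6,371 km and a mid-range altitude of about 410 km (both my assumptions, not from the video), 15.54 orbits per day works out to roughly the quoted 27,600 km/h:

```python
import math

EARTH_RADIUS_KM = 6371   # mean Earth radius (my assumption)
ALTITUDE_KM = 410        # mid-range ISS altitude (my assumption)
ORBITS_PER_DAY = 15.54   # from the quote

# Orbit circumference, then distance per day divided by 24 hours
circumference_km = 2 * math.pi * (EARTH_RADIUS_KM + ALTITUDE_KM)
speed_kmh = circumference_km * ORBITS_PER_DAY / 24
print(f"{speed_kmh:,.0f} km/h")  # ≈ 27,600 km/h
```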
I love the choice of music & wondered whether it comes from Dunkirk. Close: that somewhat anxious tock-tock undertone is indeed a Hans Zimmer jam, but from 20 years earlier (The Thin Red Line).
A number of our partner teams have been working on both the foundation for browser-based ML & on cool models that can run there efficiently:
We are excited to announce the release of BodyPix, an open-source machine learning model which allows for person and body-part segmentation in the browser with TensorFlow.js. With default settings, it estimates and renders person and body-part segmentation at 25 fps on a 2018 15-inch MacBook Pro, and 21 fps on an iPhone X. […]
This might all make more sense if you try a live demo here.
“Man, that dude looks eerily like David Salesin,” I thought the other day as I was getting coffee, “but nah, he’s wearing a new-employee badge. But wait, holy crap… that dude is David Salesin wearing an employee badge!”
Perhaps you don’t know his name, but for 11+ years David (a tango-dancing Aikido black belt) led a wing of Adobe Research, and we collaborated on more projects than I can begin to count. Now he’s at the Goog (having led Snapchat research in the interim), teaming back up with several of our fellow Adobe alums. I can’t wait to see what he does here!
Last Monday I began work as a Principal Scientist / Director at Google AI Perception, based in San Francisco. I’m excited to collaborate with so many good friends and colleagues who are already at Google, and, in time, to hire many more. Google’s products and reach are incredibly broad, and so is the mandate for my lab: I look forward to continuing to invent tools for creative expression, as well as to begin working on some brand-new, far-reaching challenges potentially well outside my area of expertise, like applying AI to healthcare. In my new role, I’m energized to grow in new ways, working on projects that, in Larry Page’s words, are “uncomfortably exciting”!
I’m bemused/amused to see this once-obscure (to non-speakers of Japanese) term now getting verbed in an Apple ad that’s racked up nearly 20 million views since Friday. (“This is the strangest life I’ve ever known…”)
Johnny Schaer (Johnny FPV) is a pro drone racer. His drones are designed to be light, quick, and nimble, able to fly upside down and through all kinds of crazy flight paths that DJI’s drones could never achieve. And when somebody with the skill of Johnny turns on the camera, that’s when you get results like the video above.
To shoot the footage, Johnny used a drone built around the AstroX X5 Freestyle frame (JohnnyFPV edition, obviously) with a GoPro Hero 7. It has no GPS, no gimbal, no stabilisation, no collision avoidance, none of those safety features that make more commercial drones predictable and easy to fly.
David Oyelowo presents Scientific and Engineering Awards to David Simons, Daniel Wilk, James Acquavella, Michael Natkin and David M. Cotter for the design and development of the Adobe After Effects software for motion graphics.
And Dave’s mom is right: They do deserve “that Nobel Prize.” 😌
ARCore’s new Augmented Faces API (available on the front-facing camera) offers a high quality, 468-point 3D mesh that lets users attach fun effects to their faces. From animated masks, glasses, and virtual hats to skin retouching, the mesh provides coordinates and region specific anchors that make it possible to add these delightful effects.
“Why do you keep looking at King Midas’s wife?” my son Finn asked as I was making this GIF the other day. :-p
We’re working to advance the global adoption of renewable energy by creating kites that efficiently harness energy from the wind. After more than a decade developing our energy kite technology on land, I’m thrilled to share that we’re now partnering with Shell to bring Makani to offshore environments. As we take this next step towards commercialization, we’ll also be moving on from the Moonshot Factory, our home for the last five years, to become an independent business within Alphabet.
The composite red, green, and blue value of every pixel in a digital photo is created through a process called demosaicing.
Enhance Details uses an extensively trained convolutional neural net (CNN) to optimize for maximum image quality. We trained a neural network to demosaic raw images using problematic examples […] As a result, Enhance Details will deliver stunning results including higher resolution and more accurate rendering of edges and details, with fewer artifacts like false colors and moiré patterns. […]
We calculate that Enhance Details can give you up to 30% higher resolution on both Bayer and X-Trans raw files using Siemens Star resolution charts.
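For context on what demosaicing actually does: each sensor pixel records only one of red, green, or blue through a color filter array, and the missing channels are interpolated from neighbors. Here’s a toy classical (bilinear-style) sketch of my own in NumPy, for illustration only; it bears no resemblance to Adobe’s CNN approach:

```python
import numpy as np

def bilinear_demosaic(mosaic):
    """Toy demosaic of a single-channel Bayer mosaic (RGGB layout):
    fill each channel by averaging the sampled values in a 3x3 window."""
    h, w = mosaic.shape
    out = np.zeros((h, w, 3))
    # Masks marking where each channel was actually sampled (RGGB).
    masks = np.zeros((h, w, 3))
    masks[0::2, 0::2, 0] = 1   # R at even rows, even cols
    masks[0::2, 1::2, 1] = 1   # G on red rows
    masks[1::2, 0::2, 1] = 1   # G on blue rows
    masks[1::2, 1::2, 2] = 1   # B at odd rows, odd cols
    padded = np.pad(mosaic, 1, mode="reflect")
    pmask = np.pad(masks, ((1, 1), (1, 1), (0, 0)), mode="reflect")
    for c in range(3):
        num = np.zeros((h, w))
        den = np.zeros((h, w))
        for dy in range(3):          # sum over the 3x3 neighborhood,
            for dx in range(3):      # counting only sampled positions
                m = pmask[dy:dy + h, dx:dx + w, c]
                num += padded[dy:dy + h, dx:dx + w] * m
                den += m
        out[:, :, c] = num / np.maximum(den, 1)
    return out
```

Simple interpolation like this is exactly what produces the false colors and moiré that Adobe says its trained network reduces.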
Hmm—I’m having a hard time wrapping my head around the resolution claim, at least based on the results shown (which depict an appreciable but not earth-shattering change). Having said that, I haven’t put the tech to the test, but I look forward to doing so.
We’re experimenting with a way to solve this problem using a technique we call global localization, which combines Visual Positioning Service (VPS), Street View, and machine learning to more accurately identify position and orientation. […]
VPS determines the location of a device based on imagery rather than GPS signals. VPS first creates a map by taking a series of images which have a known location and analyzing them for key visual features, such as the outline of buildings or bridges, to create a large-scale, fast-searchable index of those visual features. To localize the device, VPS compares the features in imagery from the phone to those in the VPS index. However, the accuracy of localization through VPS is greatly affected by the quality of both the imagery and the location associated with it. And that poses another question—where does one find an extensive source of high-quality global imagery?
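The matching step described above—comparing features from a phone image against a prebuilt index—can be sketched as nearest-neighbor search over feature descriptors. This is a generic illustration of mine (using Lowe’s classic ratio test to discard ambiguous matches), not Google’s actual VPS implementation:

```python
import numpy as np

def match_features(query_desc, index_desc, ratio=0.8):
    """Match each query descriptor to its nearest index descriptor,
    keeping only matches that pass the ratio test (i.e., the best
    match is clearly better than the second best)."""
    matches = []
    for qi, q in enumerate(query_desc):
        d = np.linalg.norm(index_desc - q, axis=1)  # distance to every index entry
        nearest = np.argsort(d)[:2]                 # two closest candidates
        if d[nearest[0]] < ratio * d[nearest[1]]:   # unambiguous best match
            matches.append((qi, int(nearest[0])))
    return matches
```

Given enough confident matches to features with known positions, the device’s pose can then be solved geometrically, which is the “localize” half of the pipeline.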
What does the computer interface of the future look like? One bet from Google is that it will involve invisible interfaces you can tweak and twiddle in mid-air. This is what the company is exploring via Project Soli, an experimental hardware program which uses miniature radar to detect movement, and which recently won approval from the FCC for further study.
Almost looks deceptively pleasant & prosperous in these lovely aerials:
Pyongyang is by far the weirdest and strangest place I have ever been to. At the same time it’s also one of the most interesting and intriguing places, and unlike anywhere else I have ever been. You go there with 100 questions and you return with 1000!
The Childish Gambino Playmoji pack features unique moves that map to three different songs: “Redbone,” “Summertime Magic,” and “This is America.” Pixel users can start playing with them today using the camera on their Pixel, Pixel XL, Pixel 2, Pixel 2 XL, Pixel 3 and Pixel 3 XL.
And with some help from my team:
He even reacts to your facial expressions in real time thanks to machine learning—try smiling or frowning in selfie mode and see how he responds.
Given my experience with my deaf son, who uses cochlear implants, lip-reading, and sign language to communicate with others, I can tell you that these apps—unlike certain misguided Microsoft accessibility efforts, like Cortana screeching during Windows Setup—address real-world problems that impact many, many people. And that they are, thus, both well-intentioned and truly useful. Bravo, Google.
As we know from velociraptors, things tend to go awesome once creatures learn how to open doors. The Verge writes,
The key to the design is the use of interchangeable adhesives on the drone’s base: microspines for digging into rough materials like stucco, carpet, or rubble, and ridged silicone (inspired by the morphology of gecko feet) for grabbing onto glass. Both microspines and silicone ridges only cling to surfaces in one direction, meaning they can be easily detached. With these in place, the micro-drones can pull well above their 100-gram weight, exerting 40 newtons of force, or enough to lift four kilograms (about eight pounds).
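Those numbers check out: lifting a mass against gravity takes F = m·g, so 40 N of anchoring force supports roughly 4 kg—about 40 times the 100-gram drone’s own weight:

```python
G = 9.81  # m/s^2, standard gravity

force_n = 40.0        # anchoring force from the article
drone_mass_kg = 0.1   # 100-gram drone

liftable_kg = force_n / G
print(f"{liftable_kg:.1f} kg, {liftable_kg / drone_mass_kg:.0f}x the drone's own mass")
```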
We present GridDrones, a self-levitating programmable matter platform that can be used for representing 2.5D voxel grid relief maps capable of rendering unsupported structures and 3D transformations. GridDrones consists of cube-shaped nanocopters that can be placed in a volumetric 1×n×n mid-air grid, which is demonstrated here with 15 voxels. The number of voxels and scale is only limited by the size of the room and budget. Grid deformations can be applied interactively to this voxel lattice by manually selecting a set of voxels, then assigning a continuous topological relationship between voxel sets that determines how voxels move in relation to each other and manually drawing out selected voxels from the lattice structure. Using this simple technique, it is possible to create unsupported structures that can be translated and oriented freely in 3D. Shape transformations can also be recorded to allow for simple physical shape morphing animations. This work extends previous work on selection and editing techniques for 3D user interfaces.
Heh—I love the fun that Cuban fashion brand Clandestina is having with the Chrome “no internet” dino. Here he dodges palm trees, pineapples, and old Chevys before finally colliding with his nemesis, connectivity (“3G”).