These include virtual tours of Citi Field in New York and Oriole Park at Camden Yards in Baltimore, both of which are narrated by MLB Network’s Heidi Watney. You can also get behind-the-scenes access with career tours that showcase the lives of a baseball beat reporter and television broadcasters. We’re also bringing you a Statcast tour, so you can geek out Moneyball-style with the math and physics behind the game.
“So, what would you say you… do here?” Well, I get to hang around these folks and try to variously augment your reality:
Research in Machine Perception tackles the hard problems of understanding images, sounds, music and video, as well as providing more powerful tools for image capture, compression, processing, creative expression, and augmented reality.
We actively contribute to the open source and research communities. Our pioneering deep learning advances, such as Inception and Batch Normalization, are available in TensorFlow. Further, we have released several large-scale datasets for machine learning, including: AudioSet (audio event detection); AVA (human action understanding in video); Open Images (image classification and object detection); and YouTube-8M (video labeling).
We’ve redesigned Science Journal as a digital science notebook, and it’s available today on Android and iOS.
With this new version of Science Journal, each experiment is a blank page that you can fill with notes and photos as you observe the world around you. Over time, we’ll be adding new note-taking tools… We’ve added three new sensors for you to play with, along with the ability to take a "snapshot" of your sensor data at a single moment in time.
Honestly, I hope that my friends making imaging tools see things like MugLife (as well as automatic image selection & extraction, etc.) and say “Holy shit, it’s not the 90’s anymore; time to up our game.”
Paul Asente is an OG of the graphics world, having been responsible for (if I recall correctly) everything from Illustrator’s vector meshes & art brushes to variable-width strokes. Now he’s back with new Adobe illustration tech to drop some millefleurs science:
PhysicsPak automatically fills a shape with copies of elements, growing, stretching, and distorting them to fill the space. It uses a physics simulation to do this and to control the amount of distortion.
The new video for rock band Spoon’s “Do I Have to Talk You Into It” consists of lead singer Britt Daniel being rapidly morphed, deformed, beautified, clone-stamped, liquified, and peeled apart in Photoshop. At one point, Daniel is transformed into a coyote, the Photoshop interface drops away for a split second, and we just see some video of a snarling coyote in the woods. Why not?
DxO plans to continue development of the Nik Collection. The current version will remain available for free on DxO’s dedicated website, while a new “Nik Collection 2018 Edition” is planned for mid-next year.
“The Nik Collection gives photographers tools to create photos they absolutely love,” said Aravind Krishnaswamy, an Engineering Director with Google. “We’re thrilled to have DxO, a company dedicated to high-quality photography solutions, acquire and continue to develop it.”
DxO is already integrating Nik tech into their apps:
The new version of our flagship software DxO OpticsPro, which is available as of now under its new name DxO PhotoLab, is the first embodiment of this thrilling acquisition with built-in U Point technology (video).
Having known them as Photoshop developers, I was always a big fan of the Nik crew & their tech. (In fact, their acquisition by Google was instrumental in making me consider working here.) I wanted to acquire them at Adobe, and I was always afraid that Apple would do so & put U Point into Aperture! ¯\_(ツ)_/¯
The desktop plug-ins, however, were never a great fit for Google’s mobile/cloud photo strategy, and other than Analog Efex, none had been improved since 2011 (more than a year before Google acquired them). I know that Aravind Krishnaswamy (badass photog, Photoshop vet, eng manager for Google Photos) and others went many extra miles to find a good new home for the Nik Collection, and I’m really excited to see what DxO can do with it. On behalf of photographers everywhere, thanks guys!
When they’re not savagely trolling me (“Hey Google, play Justin Bieber!”—then running away), the Micronaxx really enjoy playing the “I’m Feeling Lucky” trivia app with us. Therefore I was charmed to get invited to brainstorm with my Toontastic friends & others from Google’s kid-focused group, coming up with all kinds of ideas for other family-oriented audio apps. Now that work is starting to come to fruition, enabling 50+ new games & activities on Google Home:
Google says the Assistant is now better at recognizing kids’ voices, and, as with adults, it’ll be able to distinguish between them so that it can customize responses to each person. To do this, kids will need a Family Link account—Google accounts for kids under 13 that allow for parental supervision.
The last time I recall charting features in Adobe Illustrator getting an update, Bob Dole was running for president—in 1996. Later (c. 2000), Illustrator & ImageReady (later Photoshop) added the ability to bind text objects & shapes to variables. That would have been a godsend in my old graphics production life, but the world didn’t seem to take much notice.
Figuring that we were never going to get around to doing something natively in the apps, I proposed enabling HTML or Flash layers right on the canvas of Adobe design apps. That way a single HTML or SWF GUI could run right in Illustrator, Photoshop, InDesign, etc.—and remain alive & dynamic when exported. You could argue that I was on crack, or you could argue that had we gone that way, we’d have had great charting a decade ago—or both.
But may the future bury the past: it looks like Adobe is at last getting serious about delivering great infographic-making tools. Check out this sneak of “Project Lincoln”:
As rad as now-venerable (!) Content-Aware Fill tech is, it’s not semantically aware. That is, it doesn’t pay attention to what objects a region contains (e.g. face, clouds, wood), and so it can produce undesirable results. Here Adobe’s Jiahui Yu shows off a smarter successor, DeepFill:
Watching the little “heart” portion of the demo, I can only imagine what Russell Brown will do with this tech.
Question, though: If Content-Aware Phil is passé, will we see the rise of Deep Phil, below? (And yes, I could use some quick style-transfer integration in Photoshop to help with a piece like this. Chop chop, Adobeans. :-))
Today, we’re putting that same 3D model into an experience for everyone to explore. We call it Access Mars, and it lets you see what the scientists see. Get a real look at Curiosity’s landing site and other mission sites like Pahrump Hills and Murray Buttes. Plus, JPL will continuously update the data so you can see where Curiosity has just been in the past few days or weeks. All along the way, JPL scientist Katie Stack Morgan will be your guide, explaining key points about the rover, the mission, and some of the early findings.
If TensorFlow, PDAF pixels, and semantic segmentation sound like your kind of jam, check out this deep dive into mobile imaging from Google research lead Marc Levoy. He goes into some detail about how the team behind the new Pixel 2 trains neural networks, detects depth, and synthesizes pleasing, realistic bokeh even with a single-lens device. [Update: There’s a higher-level, less technical version of the post if you’d prefer.]
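If you want a feel for the final step, here’s a toy sketch of depth-dependent blur—my own illustration only, with a simple Gaussian standing in for real lens optics, and nothing to do with the actual Pixel 2 pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_bokeh(image, depth, focus_depth, max_sigma=8.0):
    """Toy "synthetic bokeh": blur each pixel more the farther its
    estimated depth is from the focal plane.

    image: HxWx3 float array; depth: HxW per-pixel depth estimates
    (in a real phone these might come from a neural net plus
    dual-pixel parallax). Pixels near focus_depth stay sharp.
    """
    # Blur strength grows with distance from the focal plane.
    sigma = np.clip(np.abs(depth - focus_depth) * max_sigma, 0, max_sigma)
    # Precompute a small stack of progressively blurred images...
    levels = np.linspace(0, max_sigma, 5)
    stack = [
        np.stack([gaussian_filter(image[..., c], s) for c in range(3)], axis=-1)
        if s > 0 else image
        for s in levels
    ]
    # ...then pick, per pixel, the level closest to the desired sigma.
    idx = np.argmin(np.abs(sigma[..., None] - levels), axis=-1)
    out = np.zeros_like(image)
    for i, blurred in enumerate(stack):
        mask = idx == i
        out[mask] = blurred[mask]
    return out
```

The blur-stack trick (a few precomputed levels plus per-pixel selection) is a common cheap approximation; the real article describes far more sophisticated, edge-aware machinery.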
Explore the icy plains of Enceladus, where Cassini discovered water beneath the moon’s crust—suggesting signs of life. Peer beneath the thick clouds of Titan to see methane lakes. Inspect the massive crater of Mimas—while it might seem like a sci-fi look-a-like, it is a moon, not a space station.
“Dogs and cats clustered together—mass hysteria!” Google Photos can now search by breed (e.g. Maine Coon, Labrador), and it clusters pets alongside people:
When you want to look back on old photos of Oliver as a puppy or Mr. Whiskers as a kitten, you no longer need to type “dog” or “cat” into search in Google Photos. Rolling out in most countries today, you’ll be able to see photos of the cats and dogs now grouped alongside people, and you can label them by name, search to quickly find photos of them, or even better, photos of you and them. This makes it even easier to create albums, movies, or even a photo book of your pet.
Curiscope takes introspection to a whole new level—and the puns’ll get under your skin:
Virtuali-Tee is a magic lens into a world inside the body. View through our free app on your phone or tablet to unlock a portal into your body’s vital organs. Jump into the pumping heart of an awesome anatomical adventure that brings learning to life in fully animated 3D using augmented and virtual reality technologies. Take a deep breath, dive into the bloodstream, and see for yourself.
Last summer, our team threw on the Google Trekker and explored the park’s incredible terrain—it was the furthest north Street View has ever gone. Wilderness and extreme isolation characterize this area, where fewer than 50 people visit each year. The park’s name itself translates to “the top of the world” in Inuktitut, the local indigenous language.
The app’s available on Android, too. Android Police writes, “It’s essentially a GIF camera, but the app stabilizes the video while you’re recording. You can record for a few seconds, or use the fast-forward mode to speed up and stabilize longer videos.”
Not to be outdone, Google Photos on Web, iOS, and Android now displays Live Photos as well as Motion Photos from the new Pixel 2, giving you a choice of whether to display the still or moving portion of the capture. Here’s a quick sample on the Web. Note the Motion On/Off toggle up top.
I’m thrilled to have joined the team behind Motion Stills, so please let us know what you think & what else you’d like to see!
Photographer Levon Biss was looking for a new, extraordinary subject when one afternoon he and his young son popped a ground beetle under a microscope and discovered the wondrous world of insects. Applying his knowledge of photography to subjects just five millimeters long, Biss created a process for shooting insects in unbelievable microscopic detail. He shares the resulting portraits — each composed of 8,000 to 10,000 individual shots — and a story about how inspiration can come from the most unlikely places.
The team made an identical 3D AR Balloon Dog covered in graffiti and geo-tagged it to the exact coordinates, “as if the result of an overnight protest” says Sebastian. “It is vital to start questioning how much of our virtual public space we are willing to give to companies,” he continues.
“A humble thing, but thine own,” Vin Scully used to say, and I’m happy to note that one of the photography-related features I helped shepherd through during my time on enterprise social has launched.
Photographers told us that the new Web UI for Google+, while welcome for offering features like zoom & photo sphere support, made it harder to see the context on photos & to have conversations around them. That’s now changed, providing a better balance between image & context. G+ tech lead Leo Deegan writes,
Over the next few days, we’ll be rolling out a new version of the photo lightbox on Google+ Web. The new lightbox, which appears for photos that are part of single-photo posts (not yet for multi-photo posts), places a greater emphasis on the photo caption and comments.
There are a couple of reasons why I’m happy about this new lightbox. First, the EXIF data (found in the “Show information” menu item) brings back the display of the photo date; the previous lightbox displayed the post date. And second, clicking on the back arrow brings you to the post no matter how you arrive at the lightbox (people who found their way to a lightbox without being able to get to the post know what I’m talking about).
Watch as the all-new Pixel 2 heads up the mountains in India to test out the new Fused Video Stabilization. The left side of the video has no stabilization at all, with optical image stabilization (OIS) and electronic image stabilization (EIS) turned off. The right side is the Pixel 2 with Fused Video Stabilization enabled.
The Pixel 2 has a feature called “frame look ahead” which analyzes each individual frame of a saved video for movement. Machine learning compares dominant movements from one frame to another and stabilizes accordingly.
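That “look ahead” description maps onto a classic trajectory-smoothing scheme. Here’s a toy sketch of the general idea—purely my own assumption about the approach, since Google hasn’t published this code: estimate per-frame camera shifts, smooth the cumulative path over a window that includes future frames, and apply the difference as a correction:

```python
import numpy as np

def stabilize_trajectory(frame_shifts, lookahead=15):
    """Toy look-ahead stabilization.

    frame_shifts: (N, 2) array of estimated per-frame camera motion
    (dx, dy), e.g. from feature tracking or gyro data. Returns a
    per-frame correction offset that cancels high-frequency jitter
    while preserving the smoothed (intended) camera path.
    """
    # Cumulative camera path over the clip.
    path = np.cumsum(frame_shifts, axis=0)
    # Smooth the path with a centered moving average; the window
    # extends into *future* frames, which is possible because the
    # video is analyzed ahead of (or after) playback.
    kernel = np.ones(lookahead) / lookahead
    pad = lookahead // 2
    padded = np.pad(path, ((pad, pad), (0, 0)), mode="edge")
    smoothed = np.stack(
        [np.convolve(padded[:, i], kernel, mode="valid") for i in range(2)],
        axis=-1,
    )
    # Warping each frame by (smoothed - actual) removes the shake.
    return smoothed - path
```

For steady, deliberate motion the correction stays near zero (the smoothed path equals the actual one), which is exactly the behavior you want: pans survive, jitters don’t.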
The Google Pixel 2 is the top-performing mobile device camera we’ve tested, with a record-setting overall score of 98. Impressively, it manages this despite having “only” a single-camera design for its main camera. Its top scores in most of our traditional photo and video categories put it ahead of our previous (tied) leaders, the Apple iPhone 8 Plus and the Samsung Galaxy Note 8.
“I designed the font when I was 23 years old. I was right out of college. I was kind of just struggling with some different life issues, I was studying the Bible, looking for God and this font came to mind, this idea of, thinking about the biblical times and Egypt and the Middle East. I just started scribbling this alphabet while I was at work and it kind of looked pretty cool,” Costello said.
He added, “I had no idea it would be on every computer in the world and used for probably every conceivable design idea. This is a big surprise to me as well.”
First they add an actual Glyphs panel, now this? Dogs & cats living together, mass hysteria!
In this one-minute video, Adobe Creative Cloud introduces you to variable fonts, an OpenType font format that allows easy customization of weight, width, and slant—just drag the sliders until you get the desired results.
Holoportation is “a new type of 3D capture technology that allows high quality 3D models of people to be reconstructed, compressed, and transmitted anywhere in the world in real-time.”
When combined with mixed reality displays such as HoloLens, this technology allows users to see and interact with remote participants in 3D as if they are actually present in their physical space. Communicating and interacting with remote users becomes as natural as face to face communication.