Heh—this looks rather brilliant:
Re:scam can take on multiple personas, imitating real human tendencies with humour and grammatical errors, and can engage with unlimited scammers all at once, meaning it can continue any email conversation for as long as possible. Re:scam will now turn the tables by wasting scammers’ time, ultimately cutting into their profits.
“So, what would you say you… do here?” Well, I get to hang around these folks and try to variously augment your reality:
Research in Machine Perception tackles the hard problems of understanding images, sounds, music and video, as well as providing more powerful tools for image capture, compression, processing, creative expression, and augmented reality.
Our technology powers products across Alphabet, including image understanding in Search and Google Photos, camera enhancements for the Pixel Phone, handwriting interfaces for Android, optical character recognition for Google Drive, video understanding and summarization for YouTube, Google Cloud, Google Photos and Nest, as well as mobile apps including Motion Stills, PhotoScan and Allo.
We actively contribute to the open source and research communities. Our pioneering deep learning advances, such as Inception and Batch Normalization, are available in TensorFlow. Further, we have released several large-scale datasets for machine learning, including: AudioSet (audio event detection); AVA (human action understanding in video); Open Images (image classification and object detection); and YouTube-8M (video labeling).
[Via Peyman Milanfar]
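For the curious: Batch Normalization, mentioned in the quote above, standardizes each feature across a mini-batch and then applies a learned scale and shift. Here’s a toy pure-Python sketch of the core training-time math — not TensorFlow’s actual implementation, just the idea (the `gamma`, `beta`, and `eps` names follow the usual conventions):

```python
import math

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a list of scalar feature values across a mini-batch,
    then apply the learned scale (gamma) and shift (beta)."""
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in batch]

# After normalization the batch has (roughly) zero mean and unit variance.
normalized = batch_norm([2.0, 4.0, 6.0, 8.0])
```

In a real network this runs per-feature over tensors, with `gamma` and `beta` learned during training, which is what keeps activations well-scaled as layers get deep.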
When they’re not savagely trolling me (“Hey Google, play Justin Bieber!”—then running away), the Micronaxx really enjoy playing the “I’m Feeling Lucky” trivia app with us. So I was charmed to get invited to brainstorm with my Toontastic friends & others from Google’s kid-focused group, coming up with all kinds of ideas for other family-oriented audio apps. Now that work is starting to come to fruition, enabling 50+ new games & activities on Google Home:
Google says the Assistant is now better at recognizing kids’ voices; and, as it does with adults, it’ll be able to distinguish between them so that it can customize responses to each person. To do this, kids will need a Family Link account, a Google account for kids under 13 that allows for parental supervision.
Check it out:
Running Content-Aware Fill over time has traditionally produced results that are, um, more artistic than useful. Check out this old weirdness:
The trick (well, one of many, I’m sure) is to make the results temporally coherent, so that elements line up across frames. Looks like Adobe’s on their way to licking that problem in “Project Cloak”:
As rad as now-venerable (!) Content-Aware Fill tech is, it’s not semantically aware. That is, it doesn’t pay attention to what objects a region contains (e.g. face, clouds, wood), and so it can produce undesirable results. Here Adobe’s Jiahui Yu shows off a smarter successor, DeepFill:
Watching the little “heart” portion of the demo, I can only imagine what Russell Brown will do with this tech.
Question, though: If Content-Aware Phil is passé, will we see the rise of Deep Phil, below? (And yes, I could use some quick style-transfer integration in Photoshop to help with a piece like this. Chop chop, Adobeans. :-))
To the moon, Alice!—and points beyond:
Now you can visit these places—along with many other planets and moons—in Google Maps right from your computer. For extra fun, try zooming out from the Earth until you’re in space!
Explore the icy plains of Enceladus, where Cassini discovered water beneath the moon’s crust—suggesting signs of life. Peer beneath the thick clouds of Titan to see methane lakes. Inspect the massive crater of Mimas—while it might seem like a sci-fi look-alike, it is a moon, not a space station.
Creeptastically impressive stuff from researchers at Tel Aviv University & Facebook:
ILM has provided a taste of some of the 1,750 visual effects shots that went into Rogue One:
I’m stupid-excited to say that I’ve just joined Google’s Skynet Machine Perception team to build kickass creative, expressive experiences, delivering augmented reality to (let’s hope) a billion+ people. I told you sh*t just got real. 🙂
Now, the following career bits may be of interest only to me (and possibly my mom), but in case you’re wondering, “Wait, don’t you work on Google Photos…?”
Well, like SNL’s Stefon, “I’ve had a weird couple of years…”
The greatly smoothed version goes basically like this:
- I joined Google in early 2014 to work on Photos. I liked to say I was “Teaching Google Photoshop,” meaning getting computers to see & synthesize like humans (making your Assistant your artist!). Among other things, we created a brand-new image editor, did some early AR face-painting work (a year+ ahead of Snapchat et al), and made movies for tens of millions of people.
- After a bit over a year, I wanted to explore some crazier photo- and video-related ideas (stuff not ready for Photos to include then, if ever), so I left the team & walked across the hall to work with & learn from Luke Wroblewski. Thus I was “working at Google on photos, just not Photos.” This was a subtle distinction, and as I was working on secret stuff, I didn’t spend time publicizing it. I remained closely involved with the ex-Nik Photos folks in building out Snapseed & the next rev of the new editor we’d started.
- Meanwhile I spent the better part of the next year thinking up, prototyping, and iterating on a bunch of little photo apps. It was a tough but enlightening process. I knew we were on to something, but I also felt like Edison saying some variant of “I have not failed. I’ve just found 10,000 ways not to make a light bulb.”
- Somewhat tired from the process & eager to make concrete contributions, I was set to join an imaging hardware team. When project plans changed, however, I agreed to help improve photography experiences on social apps including Google+.
- Having witnessed on Photos how massively speed matters, I teamed up with my future teammates in Research to build out the RAISR machine-learning library and ship it in Google+, saving users immense amounts of bandwidth (critical in the developing world).
- Since then, and up until this week, I’ve been focusing on enterprise social needs. Though it wasn’t an area I sought out, I ended up really digging the experience, and I look forward to eventually sharing some of the rad stuff my team was building.
- And then, Google bought this little company in Belarus & my old Research friends came calling…
So now we’ve come full circle, and to capture my feelings, I’ll cite SNL yet again. Wish me luck. 🙂