I’m a huge fan of Preet Bharara and his indispensable podcast, so it was a real pleasure to hear our old Adobe collaborator Dr. Hany Farid discuss the world of deepfakes, weaponized imagery, and what we can do about it. I think you’ll find it really interesting & valuable.
I’m now thinking about this constantly:
Your rate-of-learning is a better proxy for how successful you will be than your current compensation because it’s a leading rather than lagging indicator. https://t.co/w678HLoJE6
— Kyle Tibbitts (@KyleTibbitts) January 10, 2020
— John Nack (@jnack) January 12, 2020
And yeah, “Satisfaction, but feeling of uselessness…”
Recently I learned about the Japanese concept of ikigai. We all have things that
• we're good at
• we can be paid for
• we love doing
• make the world better
We all aim to be in a spot where we hit all 4. The diagram below helps explain how you feel when you hit 2 or 3 of 4 pic.twitter.com/2p0ZEOfxQp
— Dare Obasanjo (@Carnage4Life) September 3, 2019
Back in 2011, my longtime Photoshop boss Kevin Connor left Adobe & launched a startup (see NYT article) with Prof. Hany Farid to help news organizations, law enforcement, and others detect image manipulation. They were ahead of their time, and since then the problem of “fake news” has only gotten worse.
Now Adobe has teamed up with Twitter & others on the Content Authenticity Initiative, and last night they previewed Project About Face, meant to help spot manipulated pixels—and maybe even reverse the effects. Check it out:
This purchase is made up of a 1,600-megawatt (MW) package of agreements and includes 18 new energy deals. Together, these deals will increase our worldwide portfolio of wind and solar agreements by more than 40 percent, to 5,500 MW—equivalent to the capacity of a million solar rooftops. Once all these projects come online, our carbon-free energy portfolio will produce more electricity than places like Washington D.C. or entire countries like Lithuania or Uruguay use each year.
Our latest agreements will also spur the construction of more than $2 billion in new energy infrastructure, including millions of solar panels and hundreds of wind turbines spread across three continents. In all, our renewable energy fleet now stands at 52 projects, driving more than $7 billion in new construction and thousands of related jobs.
Particularly as the uncle of a little dude who uses a wheelchair, this news makes me very happy & proud:
Google announced this morning via blog post that it has partnered with the Christopher & Dana Reeve Foundation to give away 100,000 Home Mini units to people living with paralysis. The news is designed to mark the 29th anniversary of the Americans with Disabilities Act (ADA), which was signed into law on this day in 1990.
There’s a form on Google’s site for people who qualify and their caregivers. Interested parties must live in the United States to receive a unit.
This is pretty “F yeah”-magical.
What if speech impediments were no impediment to interacting with devices & making oneself understood? Google researchers (the crew behind the amazing Live Transcribe) have been working with folks affected by ALS, deafness, & other conditions to make their speech & even voice utterances work well with computers & other humans. Take a look:
Having watched my teammate Dimitri use Live Transcribe in meetings for the past year, I’m super excited to see it arrive:
[It’s] a free Android service that makes conversations more accessible through real-time captioning, supporting over 70 languages and more than 80% of the world’s population.
Here’s a deeper look into how it works.
Paul Thurrott writes,
Given my experience with my deaf son, who uses cochlear implants, lip-reading, and sign language to communicate with others, I can tell you that these apps—unlike certain misguided Microsoft accessibility efforts, like Cortana screeching during Windows Setup—address real-world problems that impact many, many people. And that they are, thus, both well-intentioned and truly useful. Bravo, Google.
I’m thrilled to say that the witchcraft my team has built & used to deliver ML & AR hotness on Pixel 3, YouTube, and beyond is now available to iOS & Android developers:
For Portrait mode on Pixel 3, TensorFlow Lite GPU inference accelerates the foreground-background segmentation model by over 4x and the new depth estimation model by over 10x vs. CPU inference with floating point precision. In YouTube Stories and Playground Stickers our real-time video segmentation model is sped up by 5–10x across a variety of phones.
We found that in general the new GPU backend performs 2–7x faster than the floating point CPU implementation for a wide range of diverse deep neural network models.
A preview release is available now, with a full open source release planned for the near future.
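For developers curious what opting into GPU inference looks like, here's a minimal sketch using the TensorFlow Lite Python API. The model filename (`segmentation.tflite`) and the delegate library name are my own placeholders, not from the announcement; the exact delegate binary varies by platform, so this falls back to CPU if it can't be loaded.

```python
def make_interpreter(model_path: str, use_gpu: bool = True):
    """Build a TF Lite interpreter, optionally with the GPU delegate.

    Hypothetical helper for illustration; requires the `tensorflow`
    package and, for GPU inference, a platform-specific delegate library.
    """
    import tensorflow as tf

    delegates = []
    if use_gpu:
        try:
            # Library name is an assumption; differs per OS/device.
            delegates.append(
                tf.lite.experimental.load_delegate(
                    "libtensorflowlite_gpu_delegate.so"))
        except (OSError, ValueError):
            pass  # Delegate unavailable: silently fall back to CPU.

    interp = tf.lite.Interpreter(
        model_path=model_path,          # e.g. "segmentation.tflite"
        experimental_delegates=delegates)
    interp.allocate_tensors()
    return interp
```

The pattern mirrors what the Android/iOS APIs do: inference runs through the delegate when the hardware supports it, and the same model file serves both paths.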
I often note that I came here five (five!) years ago to “Teach Google Photoshop,” and delivering tech like this is a key part of that mission: enable machines to perceive the world, and eventually to see like artists & be your brilliant artistic Assistant. We have so, so far to go, and the road ahead can be far from clear—but it sure is exciting.
“Please, please don’t come to Google and waste your time.”
I tell this to promising interview candidates. That is, I hope they come here, but it’s waaaaay too easy to fall into a velvet fog: you get free food, good money, something for your parents to brag about… but you wake up one day and realize that you’re polishing some goddamn stupid widget 9 levels deep in who-knows-what system, and you think, “Is this why I was put on earth?” This doesn’t have to happen, and indeed people often do amazing things instead—but it’s anything but guaranteed.
I always think of the amazing monologue in Walk The Line (starts around 1:30 in the clip below). If you had one song to sing before you’re dirt, are you telling me this would be it?
Now go find your song & sing the hell out of it.
Filed under “Shit That Actually Matters.”