Hmm—let’s see what develops here. PetaPixel explains,
First, it can manipulate an image based on very basic coloring, sketching, or warping commands. So you can change the shape, color, and size of an object in just a brush stroke or two, with the final product maintaining as natural a look as possible.
Second, it can actually generate images based on a rudimentary sketch.
Check out more info & demos from Berkeley.
PetaPixel also points out a “neural photo editor” from researchers at the University of Edinburgh:
[I]t uses machine learning to predict and apply the changes you’re intending to make. For example, if you select a bright color and start painting over someone’s hair, it will assume you want to turn them blonde; begin using longer brush strokes, and that blonde hair grows longer.
You simply select a color using their “contextual paintbrush” and have at it. The most basic inputs can produce extreme changes.
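The trick behind editors like these is that the brush stroke never touches pixels directly: the stroke becomes a constraint, and the editor nudges the latent code feeding a generative model until the generated image satisfies it. Here's a toy sketch of that loop in Python, with a fixed linear map standing in for the trained generator (the actual research systems use GANs; `G`, the 4-dim latent code, and the 12-"pixel" image are all made-up stand-ins for illustration):

```python
import numpy as np

# Toy stand-in for a trained generator: a fixed linear map from a
# 4-dim latent code z to a 12-"pixel" image. (The real systems use a
# GAN; this only illustrates the editing loop, not the model.)
G = np.arange(48, dtype=float).reshape(12, 4) % 7 - 3.0

def generate(z):
    return G @ z

z = np.zeros(4)  # latent code currently "explaining" the photo

# The user paints pixel 3 toward a bright value. Instead of editing
# that pixel directly, nudge z by gradient descent so the *generated*
# image matches the stroke -- the rest of the image stays plausible
# because every pixel is re-rendered from the same latent code.
target_pixel, target_value = 3, 2.0
for _ in range(200):
    err = generate(z)[target_pixel] - target_value
    z -= 0.05 * err * G[target_pixel]  # gradient of 0.5*err**2 w.r.t. z

edited = generate(z)
print(round(edited[target_pixel], 3))  # 2.0: stroke satisfied via latent edit
```

Because the edit happens in latent space, a "basic input" (one painted pixel here) moves the whole latent code, which is why a couple of strokes in the real tools can produce large, globally coherent changes.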