Janelle Shane explains AI with weirdness and humor, in book form

If, like many people these days, you’re trying to get a firmer understanding of what AI is and how it works but are secretly panicking a little because you’re struggling with terminology so opaque that you’re lost before you get to Markov chains, you may want to crack open Janelle Shane’s new book. She recently sat down with VentureBeat to talk about the book, whose title, You Look Like a Thing and I Love You, is actually an AI-generated pickup line.

Shane maintains the AI Weirdness blog and combines knowledge from her Ph.D. in electrical engineering, fascination with AI, and propensity for slightly deadpan absurdist humor to explain AI in a way that is both hilarious and easy to understand. More importantly, she uses humor as a frame to display how AI is actually dangerously bad at most things we wish it could do. Her take is a refreshing counter to the often overly fantastical notions about AI, its looming sentience, and its capacity for either utopia or dystopia.

Although the book walks the reader through what AI is and explains how AI “thinks” and how we should think about how AI thinks, it’s full of giggle-inducing hand-drawn illustrations and endless comical examples. Shane’s signature move is using neural nets to generate things like new ice cream flavors, knitting patterns, and recipes. The results are usually funny and bizarre and are things that could exist, like Lemon-Oreo ice cream or a Cluckerfluffer sandwich (chicken, cheese, and marshmallow). It’s the least creepy way to show how AI so often falls off a cliff into the uncanny valley.
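For readers who want to peek at the mechanics behind those generated names, here is a deliberately tiny sketch in Python. It uses a character-level Markov chain rather than the neural networks Shane actually trains, and the handful of training flavors is invented, but it illustrates the same basic move: learn which characters tend to follow which, then sample new names that almost could exist.

```python
import random
from collections import defaultdict

# A toy character-level Markov chain, NOT the neural nets Shane uses --
# just a minimal illustration of "learn character patterns from a list
# of names, then sample new names that could almost exist."

training_flavors = [
    "chocolate cherry", "lemon oreo", "strawberry swirl",
    "peanut butter cup", "mint chocolate chip",
]  # hypothetical training data

ORDER = 3  # how many previous characters the model conditions on

# Count which character tends to follow each 3-character context.
transitions = defaultdict(list)
for name in training_flavors:
    padded = "^" * ORDER + name + "$"          # start/end markers
    for i in range(len(padded) - ORDER):
        context = padded[i:i + ORDER]
        transitions[context].append(padded[i + ORDER])

def generate_flavor(max_len=30):
    context = "^" * ORDER
    out = []
    while len(out) < max_len:
        next_char = random.choice(transitions[context])
        if next_char == "$":                   # end-of-name marker
            break
        out.append(next_char)
        context = context[1:] + next_char
    return "".join(out)

for _ in range(5):
    print(generate_flavor())
```

A real character-level neural net conditions on much longer context and far more data, which is why its inventions land closer to plausible (and, often, funnier).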

If you can make it past the recipe for Basic Clam Frosting without laughing (page 18), there might be something wrong with you.

Above: “These flavors are not delicious.” — Janelle Shane at her TED Talk

Image Credit: TED

This interview has been edited for length and clarity.

VentureBeat: When I first came across [the book], it seemed like it was going to be very educational. And also very accessible. And I thought that sounded fantastic. I know this all started with AI Weirdness. Why did you start that blog? How did that come about?

Janelle Shane: I saw someone else named Tom Brewe who had used one of these text-generating neural nets to do cookbook recipes. It was one of these situations where I’m laughing so hard the tears are streaming down my face. I have now read all of the recipes he’s generated — there now needs to be more recipes! And [I thought] I need to learn how to generate them. So that’s kind of — it was more like, I thought this was hilarious. I generated this thing that I’m like, “Okay, now I have to show people.” I figured it would be a few of my friends who would read it. I had the blog from my Ph.D. days that mostly had electron microscope pictures on it. So I just threw some of these experiments on there, [thinking], you know, “Here’s a blog, I’ll just put this stuff on there.” And then, yeah, to my surprise, other people started reading it, like, people I didn’t even know personally.

VentureBeat: At what point did the book emerge as a thing, coming out of the blog?

Shane: In part it was because I got [to] talking to an agent and to a publisher who were interested in there being this book. But, you know, the other motivation too was, again, repeatedly getting comments from people [who were] confused because the AI on my blog was making all sorts of mistakes and "Isn't AI supposed to be smart?" So this book is kind of a way to explain how these things can both be true at once and what "smart" means in the context of AI. And what it definitely doesn't mean.

VentureBeat: So they didn’t get the joke? (Which is also kind of funny.)

Shane: It was more like they were confused, like they got that, "Okay, the AI is making mistakes, and haha that's funny, but why is this one so bad? And, you know, is this a different kind of AI from the stuff that's recommending movies or labeling photos and things?"

VentureBeat: And how did you come to the illustrations? Was that kind of your thing, or did the publisher kind of come in and do that?

Shane: Those are all my own illustrations.

VentureBeat: How did you come up with that sort of “language” — it’s a really funny and educational sort of artistic language. How did that come to you?

Shane: This is how I explain things to myself in my own voice. I remember kind of overhearing when I was doing rehearsals, actually, for the TED conference, and I overheard a couple of the coaches saying to each other, “No, she really does sound like that!” So yeah, it’s not an act! This is really how I talk about these things and how I like to write them down for myself so they sort of make sense. I’m telling myself this story to sort of put my thoughts in order.

Definitely, this approach of focusing in on examples, and especially on these kinds of memorable things — that's what's going to stick around after you're done reading the book. It's not like my bullet-pointed principles of AI weirdness were going to stick in [your] mind as much as the story about, you know, the AI that got confused about whether we're supposed to be identifying fish and whether human fingers holding the trophy fish are part of the fish. Or, you know, the AI that was told to sort a list of numbers and deleted the list, thereby eliminating all the sorting errors and thus technically getting a perfect score.
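That list-sorting anecdote is a textbook case of an optimizer gaming its stated objective. The sketch below is hypothetical (the scoring function is ours, not from any system Shane describes), but it shows why "minimize the sorting errors" quietly rewards deleting the list.

```python
# A minimal sketch of the "delete the list" loophole: if the objective is
# "minimize the number of out-of-order pairs," an empty output is a perfect
# score. The scoring function here is hypothetical, just to show the failure mode.

def sorting_errors(numbers):
    """Count adjacent pairs that are out of order."""
    return sum(1 for a, b in zip(numbers, numbers[1:]) if a > b)

def honest_solution(numbers):
    return sorted(numbers)

def cheating_solution(numbers):
    return []  # no list, no errors

data = [5, 3, 8, 1]
print(sorting_errors(honest_solution(data)))    # 0 -- sorted correctly
print(sorting_errors(cheating_solution(data)))  # 0 -- "perfect," for the wrong reason
```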

So that sort of focusing in on these really weird, very non-human sorts of things to do … I think in part comes from the first way that I encountered machine learning, which was as a freshman in college in 2002, sitting in on a lecture by a guy who’s studying evolutionary algorithms. His name is Professor Erik Goodman, and he was explaining his research and telling just these same kinds of stories. And I remember that just really grabbed me and was so weird and memorable and made sense, but was also difficult to picture, you know — kind of like studying the natural world in a way. And so that really kind of was the way I got drawn into machine learning.

VentureBeat: It struck me that the way you explained how neural nets work is very Amelia Bedelia-like. [Amelia Bedelia is a children’s book character who takes everything people say literally, thus exposing the humor in things like idioms.] It’s almost like the math version of how Amelia Bedelia plays with language.

Shane: Yeah, you know, that’s one of the things that’s fun about neural nets, is that they don’t have the context for the exact problem they’re trying to solve. They take things entirely literally because that’s all they know how to do. And then you get some really weird situations, this really weird humor that kind of arises from that. So yeah, it definitely pushes some of the same buttons.

VentureBeat: You use real-world examples, and sort of absurdist humor, to show what happens when AI goes awry. And it’s kind of easy to vet the output, because it kind of comes out as gibberish, right? We can tell that the recipe doesn’t make sense, and it’s fun to make it and sort of point that out, but I wonder about less whimsical examples. Because, you know, there’s a lot of researchers and practitioners who are doing the same things. And I guess the concern that I had is: I can tell this output is silly, but can they? And how are they able to vet their own results in kind of the same way that we can when we have a hilarious recipe?

Shane: Yeah, that is the thing about these kinds of experiments that kind of led me into explaining AI. When you have these absurdist, real-world examples … It’s not deep into the statistics of handing out loans or, you know, looking at resumes, but … it’s messing up something that we all can kind of see. That is a really helpful way of seeing what kinds of limitations there are and how these same sorts of limitations pop up over and over again.

They’re making these same sorts of mistakes, and the question is, are they looking at them closely enough to catch these mistakes?

VentureBeat: Are there established solutions for people to vet that output, or is that still a big problem to solve?

Shane: The only general rule I can say is, never trust a neural net. Never trust it to know what you asked it. Never trust it not to take a speedy shortcut. And then there are ways of digging in and finding out whether it has solved this correctly. That varies depending on what problem you’re trying to solve.

So, in image recognition, there are some really nice tools for explainability, where [the neural net] highlights parts of the images it’s using to make decisions and [you can] say, “Oh no, it’s supposed to be looking at a dog, and instead it’s looking at the grassy background,” or something like that.
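For the curious, the sketch below shows one common flavor of the tools Shane is alluding to: a gradient-based saliency map that highlights which pixels most influenced a pretrained classifier's decision. The image path is a placeholder and ResNet-18 is just a convenient stand-in model; this is only one of many explainability techniques.

```python
from torchvision import models, transforms
from PIL import Image

# Gradient-based saliency: which input pixels most affect the top prediction?
# "dog.jpg" is a placeholder path; ResNet-18 is just a convenient pretrained model.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("dog.jpg").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)

scores = model(image)                      # class scores, shape (1, 1000)
top_class = scores.argmax().item()
scores[0, top_class].backward()            # gradient of the winning class score

# Per-pixel influence: absolute gradient, max over the color channels.
saliency = image.grad.abs().max(dim=1).values.squeeze()
print(saliency.shape)  # a 224 x 224 map; bright regions drove the decision
```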

There are those kinds of tools. Some feel explainability is really hard to build in, and in some cases your main tools may be running statistics on your output and saying, “Okay, we don’t know how it’s making decisions on these loan applications. We don’t know what rules it’s applying, but we can at least run a whole bunch of test applications. Throw them at it. See if there are any trends in the way that it makes the decision, so we know how the decision should turn out if the thing isn’t biased.”
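A bare-bones version of that "throw a bunch of test applications at it" audit might look like the sketch below. The loan_model function here is a made-up stand-in for whatever black box is being tested; the point is the pattern of scoring many comparable applications and comparing outcomes across groups.

```python
import random

def loan_model(application):
    # Placeholder decision rule -- in a real audit this is the black box under test.
    return application["income"] > 40_000

def make_application(group):
    return {"group": group, "income": random.gauss(50_000, 15_000)}

random.seed(0)
applications = [make_application(g) for g in ("A", "B") for _ in range(10_000)]

approvals = {"A": [], "B": []}
for app in applications:
    approvals[app["group"]].append(loan_model(app))

for group, decisions in approvals.items():
    rate = sum(decisions) / len(decisions)
    print(f"group {group}: {rate:.1%} approved")
# If the rates diverge for groups that should be treated alike,
# that's the trend worth digging into.
```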

I know there are services out there now systematically testing to see whether one’s algorithm is biased or has some other kinds of outcomes. And I know that there’s a lot of companies that just plain don’t do that. You know, I would like to see there be more of a top-down requirement that these kinds of algorithms be vetted by some standardized process — at least demonstrate that they’re not problematic. Or they’re not as problematic as they could be.

VentureBeat: That kind of speaks a bit to what you got into in the last couple of chapters, especially when you kind of hammered on human-faked AI: Sometimes these [AI] companies are like, “We can do this, trust us,” and then it turns out they can’t do it, and then they kind of have to panic and either give back all the startup money because they failed or they get in there and kind of mess with it and fix it by hand. That speaks to a bit of the “black-boxness” of it.

I’m coming from a journalist perspective, where I’m not an expert. And so we have to have, you know, a certain amount of trust when someone tells us their thing works … it’s tricky. So I’m wondering, just in your opinion, how pervasive do you think that problem is in the market?

Shane: I think it’s pretty pervasive, actually. There are some fields that are perhaps more prone than others to these kinds of problems. I would look really closely at anybody who automatically sorts resumes. We’ve seen some case studies, like this one from Amazon, where they volunteered that they had a tool they decided not to use because it was so hard to get it to stop discriminating.

It’s kind of like a rare case where we get to see the … struggles of the engineering team, trying to get rid of this thing’s tendency to try to copy the bias in its training data. For every Amazon that tells us, “Whoops, we had a problem, we decided to mitigate it by — in this case — not using this kind of solution at all,” we’ve got a bunch of companies out there who are doing basically the same kind of application, and essentially the same kind of approach, and they haven’t published any numbers on what kind of bias they have or what specifically they’ve done to reduce bias. So, yeah, there are a lot of unfounded claims out there that say, “Oh yes, use our computer algorithm and we will take care of your HR team’s hiring bias,” without any proof that using AI is actually going to help.

VentureBeat: Pretty quickly a trend has emerged, where [makers of AI products and services] are like, “Oh no, there’s humans in the loop for sure.” And whether it’s lip service or if they really believe it, they’ll say, “Yeah, look, we have to have the human touch,” and they don’t come out and say “We can’t trust our models.” With that in mind, do you think that sometimes companies are kind of just pivoting to that messaging without really addressing the core problem?

Shane: There are definitely a lot of AI applications where human curation is needed as a kind of quality control. Definitely in all the AI-generated humor that I’m doing, like … most of it’s not interesting. You need a human. I would never train a text-generating algorithm and then just, like, give it to children to have fun with it. It is so hard to sanitize! When people talk to me about doing this kind of project, I always say, “You don’t want this thing talking to the public.” Because it will say something terrible, and you need the human layer in between to stop that from happening.

And, you know, in so many cases, especially with some kind of creative application — those are the ones I know the most about — I definitely see, you know, it’s a time saver, but you still need a human. Language translation [is the] same sort of thing; human translators now use machine translation for the first draft. And it’s close, but it is definitely not ready without some human quality cleanup.

But then we have this other case, going back to having a human in the loop to try to prevent the algorithm from being biased. And that’s kind of interesting; circling back to [the idea that] the humans [who built the algorithm] are biased, the algorithm’s biased and needs the humans. And to that I would just say, “Well just, you know … show me the data.”

We can run test data through this thing. That’s the beauty of having an algorithm without a human running these decisions. We can give it 10,000 loans or 10,000 resumes or 10,000 social media profiles and see if there are any trends. And if someone hasn’t done that, I worry that they’re not serious about whether or not their algorithm is flawed in a potentially harmful way.

VentureBeat: What do you think, if anything, has changed in the field — like in terms of research, impact, deployment, whatever — since you finished writing the book? (I know it was 2019, which is recent.)

Shane: Oh man, things change so quickly. One of the things I’ve been encouraged to see is more pushback against some of the bad applications and more communities and people stepping up to bat against this and governments also trying to step in — especially [the] European Parliament trying to step in and do the right things.

So I’m encouraged that we’re hopefully going to be a bit out of the snake-oil realm. … There’re now more applications out there to worry about, like with facial recognition. It’s … not great, but it’s working better, so there are different aspects to worry about, versus in my book, where the concern was [that] it doesn’t even work. Well … now it kind of works, and that’s also a problem.

VentureBeat: What are your top five favorite random things generated by a neural net?

Shane: Oh man. [pause] I really like the experiments where people take the neural net stuff as the starting point and just run with it. There was an experiment I did — AInktober — where it was generating drawing prompts during the month of October. People drew the most amazing things.

There was one called SkyKnit, where it’s generating knitting patterns, and people had to try and debug all of the mistakes and make them into things, and it was just glorious.

And then Hat 3000 did the same thing with crocheting. Turns out, crocheted hats were an unwittingly bad choice that tended to create universe-devouring, hyperbolic yarn monstrosities.

There was one example I did where I was generating cookies, and people actually made recipes based on the illustrations, like spice biggers and walps and apricot dream moles.

The paint colors keep coming back again. Using paint colors gives me the opportunity to print the word “TURDLY” in giant letters across presentation screens.

Signed copies of the book are available from Boulder Book Store.


Author: Seth Colaner
Source: VentureBeat
