Transcript of “I am colorblind, and you can too!”

Author’s note: This is a transcript of a talk I gave at QueensJS. I wrote a colorblind Twitterbot, and I presented my findings in this talk. I’m recording it here, for posterity. To save you from my verbal tics, it’s not an exact transcription. I cleaned up the sentences to make them more readable. I didn’t change the meaning of anything.

Link to source code

Link to slides

Welcome to my talk, “I am colorblind, and you can too!”

I am going to take you through the process of how I built a colorblind Twitterbot. Who am I? I am Jake Voytko. It’s very easy to find me online: I’m jakevoytko@gmail.com, @jakevoytko, I’m jakevoytko@any-service-you’ve-ever-heard-of. And I’m red-green colorblind. I actually have a severe version of it. Most people who are colorblind have a mild version, and it turns out that I don’t see much color at all. We’ll get into just how drastic the difference is later. For a while, I worked on Google Docs. But now I’m funemployed, trying to learn new things and work through some side projects. And this talk is some of the output of that time. I hope you enjoy it!

Before we get into it… I’m sure the colors on the projector are reproduced perfectly, but just in case they’re not, and there’s any confusion with the pictures, I have the Twitterbot running right now. It has all of the pictures from the talk. If you go to @JakeWouldSee, or tweet at it with an image in the tweet, it’ll send back the colorblind version of your image. But the problem is that I haven’t load tested it! It is probably going to fall over if everyone does it. But, we’ll see how that goes. It’ll be fun!

A rainbow kite

Normal vision

Rainbow kite, as seen by a protanope

Protanopic vision

So, the talk is divided into three sections. First, I want to talk about what people see when they see color. And once we know that, it’ll be easy to talk about what I see, and how it differs from what a normal person sees. Next, because my colorblindness is so severe, it’s easy to model. And I’ll draw some graphs and show you what that looks like. And finally, we’ll go over what the results are.

Part 1: how do people perceive color?

Red+green doors

Normal vision

Red/green doors with protanope vision

Protanopic vision

Do these two pictures look different to you? audience murmurs “yes” They do? They look almost identical to me. So this is funny. These are two doors that are at Google New York, where I used to work. And all of a sudden, one day they were painted and I didn’t realize it. Apparently, the door on the left is red, and the door on the right is green. In the images, the normal version is on the left, and the colorblind version is on the right. And you’re only supposed to walk through the green door. That was an interesting “Today I learned” for me. audience laughs at my life

So, let’s get into it! How do eyes normally work? All of you who can see color normally have three types of cone cells in your eyes. They each detect different parts of the color spectrum. There are the long-wavelength ones that detect the colors I list here: red, orange, yellow, and green. You have medium-wavelength ones that pick up, very strongly, yellows and greens, and you have short-wavelength ones that pick up blue and cyan. Basically, your brain takes the responses from these three cell types and combines them into a color.

To give an example, if the long cell is going off the charts, “I’m getting a really strong reaction!” and the medium and short ones are getting a weak reaction, your brain will take that information and say, “the color that you are looking at is red.” And that part of your vision will be interpreted as red.

It’s useful to know that when we do work with colors, we’re not working with the full spectrum of light; we’re working with the computer’s representation of colors. So it’s useful to know how computers represent them. It’s actually very simple: you may remember from school that there are three primary colors. And it’s not a coincidence that there are three primary colors and three types of cone cells in your eyes. The idea behind the primary colors is that each one of them targets one of the cone cells in your eye. If you manipulate the amount of that particular primary, you control how strongly that cone cell responds.
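To make that concrete, here’s the three-channels-to-three-cones idea as plain sRGB triples. This is just an illustration, not anything out of the bot’s code:

```javascript
// One number per primary, each aimed at roughly one type of cone cell.
const red    = { r: 255, g: 0,   b: 0   }; // mostly excites the long cones
const yellow = { r: 255, g: 255, b: 0   }; // long and medium respond together
const gray   = { r: 128, g: 128, b: 128 }; // all three respond about equally
```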

Using that concept, you can reproduce most colors that humans are capable of seeing. You can’t reproduce all of them, which I found interesting. That was something that I learned when I did this project. So, most of you work with Javascript, and I assume all of you have worked with frontend stuff. You may know that RGB (Jake: sRGB specifically) is a very common way to represent colors. Red, green, and blue are the primary colors. And if people in the back heard people in the front laugh, it’s because I show pictures of the primary colors on the bottom. And they are red, green, and the landmark Miles Davis album “Kind of Blue.”

So… what’s different about me? You guys have three types of cones in your eyes. I only have two types of cones in my eyes. I was talking to my friend Sam (who’s in the back right now) about this, and he was telling me I’m actually missing the gene that codes for these long cells. The ones that respond very strongly to red, and also pick up some greens, yellows, and oranges, are just completely missing for me. When my brain is processing the colors, I can’t really differentiate a lot of them. The consequence of this is that I don’t really see reds much at all, and I get weaker responses for many of these other colors.

So, to quantify how different my vision is: if you ignore brightness (bright green equals green equals dark green) and just look at pure wavelengths of light, normal people can differentiate about 150 colors. In your head, guess how many colors I can see with only 2 cones. someone doesn’t follow instructions and yells “100!”. I see 17. audience gasps at how dull my life must be. But that’s just pure wavelengths of light.

To show how different I am from a typical colorblind person: have you seen those videos where people put on these glasses, and they cry at sunsets because they’re suddenly so beautiful? Well, those people have a partial response from one of the cones. They have two really strong cones and one really weak one, and the two strong ones kind of dominate the third. Those Enchroma glasses attenuate the signal from the two strong cones so that you get a more balanced response across all three. You get much more balanced color perception, and you can differentiate more wavelengths of light than you could before.

Peanut butter

Normal vision

Peanut butter with protanope vision

Protanopic vision

Another interesting fact that I found out… so, I took this colorblind test for the Enchroma glasses, and it told me that I’m “too colorblind for these glasses to work. But as a consolation prize, here’s a bunch of information about your colorblindness.” I’m reading through it, and it’s mostly stuff that I’ve read before. But at one point, there was this one sentence that said something like, “protanopes (this is the type of colorblindness I have) will even perceive color in the wrong part of the spectrum. For instance, they will perceive peanut butter as green.” And that completely blew my mind, because for my entire life I have seen peanut butter as green. On the projector is the colorblind-processed version of the picture. I’m not sure what it looks like to you; I haven’t shown this picture to anyone. But this is about what I see when I see peanut butter. Apparently peanut butter is brown. Who knew?

Part 2: Modeling colorblindness

This slide is dense, so we’re going to spend some time on it. It goes through how colorblindness is modeled. Instead of using RGB, there’s another color space that is useful for working with color, and you can change any RGB pixel into something that lands somewhere in this colorspace. The cool thing about this colorspace is that it separates luminance (the brightness) from chrominance (the color). To show you how colors land on it: you see this upside-down-U thing, where the rainbow follows the curve.

xyY colorspace

xyY colorspace

If you look inside, you’ll see that I’ve drawn this triangle. The corners are R, G, and B, which you can see through my childlike scrawl of handwriting. I have very meticulously and very scientifically reproduced this chart by hand. These are about where the primaries that your computer monitor uses land. Anything that your computer monitor is capable of reproducing is inside this triangle. You see there’s a bunch of stuff that your monitor can’t reproduce. But this is a little misleading: green and blue are next to each other in the color spectrum, yet there’s quite a bit of distance on the curve between them. So there’s actually not much that can’t be reproduced by RGB.
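For concreteness, here’s roughly the conversion that puts an sRGB pixel onto that chart. These are the standard sRGB-to-xyY formulas, not a quote of the bot’s code (that’s in the repo):

```javascript
// Standard sRGB -> xyY: undo the gamma curve, apply the RGB-to-XYZ matrix
// (sRGB primaries, D65 white), then split into chromaticity (x, y) and
// luminance (Y).
function srgbToXyy(r8, g8, b8) {
  // Undo the sRGB gamma curve to get linear light in [0, 1].
  const linear = (c8) => {
    const c = c8 / 255;
    return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  };
  const [r, g, b] = [linear(r8), linear(g8), linear(b8)];

  // Linear RGB -> XYZ.
  const X = 0.4124 * r + 0.3576 * g + 0.1805 * b;
  const Y = 0.2126 * r + 0.7152 * g + 0.0722 * b;
  const Z = 0.0193 * r + 0.1192 * g + 0.9505 * b;

  // (x, y) is where the color lands on the chart; Y is the brightness.
  const sum = X + Y + Z;
  if (sum === 0) return { x: 0.3127, y: 0.329, Y: 0 }; // black: use the white point's xy
  return { x: X / sum, y: Y / sum, Y: Y };
}
```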

xyY with confusion lines

xyY with confusion lines

How is this useful? On the bottom is how my program works. It, again, looks like a lot of childlike scrawl, but it’s not hard to read when you know what it represents. Normal people can see everything that’s inside of this U. And all of my color vision lands on this one curve; these are the only things I am capable of seeing. (Jake: this is oversimplified) You see all of these rays that meet at this one point; each ray represents a set of colors that I confuse. Any two colors on one of these lines will look the same to me. My program calculates the line through a color, and intersects it with the curve to produce an estimate of the color I see.
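In code, that step is just a ray-segment intersection. Here’s a sketch: the confusion point is the commonly cited protanope value, but the curve samples are stand-ins I’m making up purely for illustration; the real ones live in the repo:

```javascript
// Protanope confusion ("copunctal") point in xy, approximately.
const CONFUSION = { x: 0.7465, y: 0.2535 };

// Stand-in samples of the curve that my color vision lands on
// (hypothetical values, just enough to make the geometry runnable).
const MY_CURVE = [
  { x: 0.17, y: 0.05 },
  { x: 0.25, y: 0.25 },
  { x: 0.33, y: 0.33 },
  { x: 0.42, y: 0.44 },
  { x: 0.48, y: 0.47 },
];

// Intersect the ray from CONFUSION with direction d against segment [a, b].
function intersectRaySegment(d, a, b) {
  const ex = b.x - a.x, ey = b.y - a.y;
  const det = ex * d.y - d.x * ey;
  if (Math.abs(det) < 1e-12) return null; // parallel, no intersection
  const ax = a.x - CONFUSION.x, ay = a.y - CONFUSION.y;
  const t = (ex * ay - ax * ey) / det;  // distance along the ray
  const s = (d.x * ay - d.y * ax) / det; // position along the segment
  if (t < 0 || s < 0 || s > 1) return null;
  return { x: CONFUSION.x + t * d.x, y: CONFUSION.y + t * d.y };
}

// Slide a chromaticity along its confusion line onto the curve.
function simulateChromaticity(p) {
  const d = { x: p.x - CONFUSION.x, y: p.y - CONFUSION.y };
  for (let i = 0; i + 1 < MY_CURVE.length; i++) {
    const hit = intersectRaySegment(d, MY_CURVE[i], MY_CURVE[i + 1]);
    if (hit) return hit;
  }
  return p; // no intersection: leave the color unchanged
}
```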

So, I made the first version of the algorithm, and the results were weird. A lot of colors were way too bright. But I couldn’t find the bug; all of my code was correct. I checked it a thousand times. I couldn’t figure out what was going on! And finally I found this paper from 1944 that shed some light on it. audience laughs at how out-of-date my research is I know, right? But eyes weren’t different back then; it’s still good.

My entire life, I’ve always perceived red as being much darker than green. When I was younger, people would always ask, “WHAT COLOR IS THIS? WHAT COLOR IS THIS?” And I was always able to tell apart red and green, because reds were very dark. And people were like, “you’re not really colorblind.” Yes, I promise you. I am. And people would kind of nod when I said reds were darker, but we were never talking about the same thing. Apparently, a lot of reds I see at 1/10 their actual brightness, just because I’m completely missing that whole red-wavelength cone receptor. At the bottom of 24 pages, the paper had this tiny little equation that models the luminance. It uses another color space, XYZ, and it’s really easy to get that from RGB. And then you can produce the brightness at which I see the color.
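Sketched out, that brightness step looks something like the snippet below. The XYZ values come from the same matrix as the earlier conversion. The weights here are loudly hypothetical placeholders: I’m not reproducing the paper’s coefficients, and even the weighted-sum shape is my assumption about what such an equation looks like:

```javascript
// Placeholder weights, NOT the 1944 paper's numbers. With {0, 1, 0} this
// is just normal luminance; the paper's equation reweights the channels
// so that reds come out much darker for a protanope.
const PROTAN_WEIGHTS = { X: 0.0, Y: 1.0, Z: 0.0 }; // hypothetical

function protanopeLuminance(X, Y, Z) {
  return PROTAN_WEIGHTS.X * X + PROTAN_WEIGHTS.Y * Y + PROTAN_WEIGHTS.Z * Z;
}
```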

At that point, you have enough to go back to RGB. I said that one color space (Jake: xyY) had all of the colors (Jake: waving my arms on a plane), and then all of the brightnesses (Jake: pointing out an axis orthogonal to that plane). And using those two bits of information, you can convert back to RGB, and get an actual estimate of what I see. For all of the “after” images, that’s what I’m doing.
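For completeness, going back is the mirror image of the earlier conversion: rebuild XYZ from the chromaticity and the adjusted luminance, then apply the inverse matrix and re-apply gamma. Standard formulas again; a sketch, not the bot’s exact code:

```javascript
// xyY -> sRGB: x and y carry the color, Ylum carries the brightness.
function xyyToSrgb(x, y, Ylum) {
  // Rebuild XYZ from chromaticity plus luminance.
  const X = y === 0 ? 0 : (x * Ylum) / y;
  const Z = y === 0 ? 0 : ((1 - x - y) * Ylum) / y;

  // XYZ -> linear RGB (inverse of the earlier matrix).
  const r = 3.2406 * X - 1.5372 * Ylum - 0.4986 * Z;
  const g = -0.9689 * X + 1.8758 * Ylum + 0.0415 * Z;
  const b = 0.0557 * X - 0.2040 * Ylum + 1.0570 * Z;

  // Re-apply the sRGB gamma curve and clamp to displayable bytes.
  const encode = (c) => {
    c = Math.min(1, Math.max(0, c));
    c = c <= 0.0031308 ? 12.92 * c : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
    return Math.round(c * 255);
  };
  return { r: encode(r), g: encode(g), b: encode(b) };
}
```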

Now that we have all that information, it’s pretty easy to write a Twitterbot. Are people sending it tweets? Is it still up? people in the audience nod “yes” OK cool! Good for it. I wrote it in node.js, which is why I’m here at all. A lot of the stuff here was really interesting to me. You know, I worked at Google for a while, and they take care of things like authentication for you. Things like OAuth were stuff that I’d never worked with.

So for anyone who’s worked with something like OAuth, you know that to run a bot on something like a Twitter account, you need to authenticate as the user. When the user grants the application permission, you get this nice little token and secret pair. Any request that you send to the Twitter API, you send the token with it. You sign the request with the secret, producing a hash. And that’s enough for Twitter to say, “yes, the bot is allowed to do this.” At any point, the user could revoke the token, and all of the requests would start failing. So if someone figured out the password for my Twitterbot’s account and revoked the token, that would take it down.
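To illustrate where that token/secret pair goes, here’s the shape of the setup with twit, a common node.js Twitter library. This is an illustration of the idea; check the repo for what the bot actually uses:

```javascript
const Twit = require('twit');

// The consumer pair identifies the application; the access pair is the
// user-granted token plus the secret used to sign every request.
const client = new Twit({
  consumer_key: process.env.TWITTER_CONSUMER_KEY,
  consumer_secret: process.env.TWITTER_CONSUMER_SECRET,
  access_token: process.env.TWITTER_ACCESS_TOKEN,
  access_token_secret: process.env.TWITTER_ACCESS_TOKEN_SECRET,
});
```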

Twitter also offers these nice persistent streams. They have this user stream endpoint, where you send a hanging GET request, and it tells you when you have a tweet. And you don’t need to poll. My bot sits there and waits for Twitter to tell it that it has a message. And when it gets a message, it can ask, “are there image urls?” It can download the images and do the conversion.
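Continuing that sketch: the hanging GET is one call, and the bot just listens for tweet events and digs the photo URLs out of the entities payload:

```javascript
// Open the persistent user stream: one hanging GET, no polling.
const stream = client.stream('user');

stream.on('tweet', (tweet) => {
  // "Are there image urls?" Photos arrive in the tweet's entities.
  const media = (tweet.entities && tweet.entities.media) || [];
  media
    .filter((m) => m.type === 'photo')
    .forEach((m) => {
      // Download m.media_url_https, run the colorblind conversion,
      // and tweet the result back (those steps are left out here).
      console.log('convert:', m.media_url_https);
    });
});
```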

So my Twitterbot ends up as this nice little pipeline. It’s mostly around manipulating the Twitter API itself. And because Twitter does everything for you, I was surprised how little I needed to do here.

Part 3: the results

So next, let’s look at how I did. Basically, it’s hard for you to know if it did a good job or not. It’s subjective, right? It’s correct when the before and after look the same to me. I’m going to walk through a few images and tell you what worked, and what didn’t.

Flamingo painting

Normal vision

Flamingo with protanopic vision

Protanopic vision

So, do these images look different to you? lots of people say “yes” They look almost identical to me. I wrote a script and ran my program over every image I’ve ever taken, which was a couple thousand. And I calculated something called the “root mean square deviation,” which is basically a long way of saying that it penalizes errors very harshly. Any time it finds a difference, it squares it instead of just using it, and you sum all of those squared differences together. That finds images that have regions that are drastically different. This was the one that came up as the most different. It’s ironic that it’s this image for several reasons. Primarily, I painted it myself. audience laughs. It was a BYOB painting class, and I got very painfully detailed instructions on how to mix the paints and produce the flamingo. That just goes to show that with enough instruction, I can accomplish just about anything.
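For reference, the metric itself is tiny. A sketch, assuming both images have already been decoded into equal-sized flat RGBA byte arrays (the decoding is the part I pawned off on a library):

```javascript
// Root mean square deviation between two decoded images.
function rmsd(before, after) {
  if (before.length !== after.length) {
    throw new Error('images must be the same size');
  }
  let sumOfSquares = 0;
  for (let i = 0; i < before.length; i++) {
    const diff = before[i] - after[i];
    sumOfSquares += diff * diff; // squaring penalizes big differences harshly
  }
  return Math.sqrt(sumOfSquares / before.length);
}
```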

Conversation with my friend Lindsay, where she tells me my life is sad

Conversation, wherein I discover how happy my life is. E_NO_HAPPINESS

When I was testing it, all of these images looked the same to me, before and after. So I needed to constantly ping all of my friends and ask them, “what’s different? what’s different? what’s different?” I was having a conversation with my friend Lindsay about this one, where she goes over all the differences. She says, “there’s a lot of oranges and pinks, and then in the second one everything is dull gray and yellow.” And I say, “Thanks! They look almost identical to me.” I was happy that they were different for her. And she sends me a frownie face, with the message, “your life is sad.” people in the audience laugh, secretly agreeing with Lindsay No it’s not!

Sushi with normal vision

Normal vision

Sushi with protanopic vision

Protanopic vision

Another interesting image people in the audience laugh and groan. Alright, this is a good one! I had Sara Gorecki look over my slide deck and make sure I always had the before and after images, and I didn’t realize that this one would get such a strong reaction. She pointed this one out in particular as being interesting to her. And apparently to everyone. She said that the sushi on the right looks rancid, and she said she would not eat it. audience laughs in agreement with Sara. But then she asked me an interesting question; she asked, “does sushi look appetizing to you?” And the salmon in the middle does. It looks tasty. But the other two do not look that good. I know it’s good… my friend Tyler made it for me. He’s not trying to poison me. I try new foods, so I eat a lot of sushi. So I know it’s probably good… it doesn’t smell. With a lot of the foods I eat, I realize at a mental level, more than at a physical or visceral level, that they’re good.

Ping pong table with normal vision

Normal vision

Ping pong table with protanopic vision

Protanopic vision

And there were some images where there were differences, where it didn’t produce an identical before-and-after for me. And these are very interesting to me. For this one, does the table on the right look brighter than the table on the left? people say yes. Again, I was talking to Sara about this, and she called it out as an image that doesn’t look very different before and after. She was saying I should maybe try to find another image. And that was interesting to me, because I view the one on the left as being maybe 50% darker than the one on the right. This is an image where it looks like it failed to me, where it doesn’t reproduce what I see. My theory of what’s going on is that there’s more red in the one on the left. And you know that I systematically perceive reds as being very dark. So it looks more different to me than it does to her. But I didn’t look up the RGB values, so that’s a guess.

This blue error pops up in a few places. Here, the shirt I’m wearing on the right looks way brighter than on the left. And in this one, I can actually see a color shift. My dad has a shirt on, and I can see that the one on the right is greener than the one on the left. I’m not sure what the color difference really is; I can just see that it shifts.

Hannah with normal vision

Normal vision

Hannah with protanopic vision

Protanopic vision

The color shift appears much more strongly in this picture. This is the 5th most-changed image out of any that I’ve ever taken. But in this one, I can actually see that the table shifts from red to green. For this particular shade of red… actually, I should have asked someone first. Is it red? audience says “yes” Thank you! For this one, I can see a difference, so it didn’t do a good job of reproducing it. But then I was talking to my friend Hannah about this, and she said that everything about the image was different, even down to her skin color. So I think it does a mostly good job of capturing what I see, but it actually fails on a large region of the image.

So that takes us to the end of the talk. I want to briefly summarize what I went over, since I know there was a lot of information.

- You see color with 3 cones; I see color with 2 cones
- That means I see roughly 1/10 the number of wavelengths you see
- Because my color vision is so bad, it’s easy to model
- You can calculate the confusion lines in xyY, find what I see the same, and produce an estimate of the color I see
- I wrote a Twitterbot in node.js that does this. I’m going to keep it up, so feel free to send it images
- The images look pretty much the same to me, before and after. So, it did a pretty good job

Q/A

Does anyone have any questions?

To the best of your knowledge, is there a degree of variation in the colors seen by people who don’t have any type of colorblindness?

The answer is yes. The paper from 1944 had a lot of information on this, actually. It said that between the color of your lens, the color of your aqueous humour (the fluid in your eye), and your eye pigmentation, there’s actually quite a bit of individual variation. But it’s not as strong as the difference between the different types of colorblindness, or the difference between colorblindness and normal vision.

Do you have your phone on you right now? Can you take a picture of the crowd and send it to the bot? I’d love to see what we look like to you.

QueensJS with normal vision

Normal vision

QueensJS with protanopic vision

Protanopic vision

What did you use in node.js to change the colors of the images?

I wrote my own code to do this. I did not, for the life of me, want to do image encoding and decoding, because that’s horrible. So I used a library called LWIP that some gentle citizen of the Internet put up. By the way, this is all on Github. If you’re curious about how the code is written, I’ll send this slide deck out somehow, and there’s a link at the beginning where you can look at it.

Do you ever have issues with advertisements, where they think they’ve done well contrasting colors, but they get merged together?

I have a lot of problems with advertisements, but color usually isn’t one of them. Every now and then, something will look weird to me. For instance, the new “The Late Show with Stephen Colbert” sign actually looks a little strange to me, because it’s red-on-blue. But it’s a dark blue, and the red looks dark red to me, so the sign looks muddled in general.

As a frontend developer, what can I do? What kind of palettes do I have?

The answer is that I have trouble answering that, because I’m colorblind.

Has your algorithm identified any palettes that are better than others? Should we fall back to the earth tones of the 70s?

Having been on the trains in NYC, you shouldn’t do that.

Does ARIA define any color palette that should help?

I’m not sure. But I have spent a lot of time with the Adobe color picker, and that generally does a very good job of bringing up differentiable colors. So if you get your color scheme from something like that, you’re probably fine.

Do you have a favorite color?

I wore a lot of black in high school.

What color was the dress?

Ohh, I should have added that. I could never see white and gold; I only ever saw blue and black. some people in the audience cheer Epilogue! I color-processed that picture and showed it to my friends. And friends who could see one-or-the-other could still see white and gold in the picture. So the problem was on my end.

The Dress with normal vision

Normal vision

The Dress with protanopic vision

Protanopic vision

So, there’s this guy with an antenna, and he brings an object up to the antenna, and it announces the color in front of it. Have you considered something like this?

So, have I found useful tools for knowing or feeling what color something is? I’ve tried some apps on my phone to do this. When I’m clothes shopping, if I’m not sure if something’s blue or purple, I’ll put my phone up to it. But it’ll always give me these unhelpful labels. It’ll tell me “this color is schooner, or earl tea”. It never gives me the actual color name, so I never know what I’m buying. (Jake: one time, it told me a shirt was schooner, shady lady, tea, concord, and abbey. I uninstalled the app and asked the saleswoman)

Have you ever thought about making a browser extension to change colors to something you can see?

I have considered that. It’s a harder problem; you kind of need to know how to spread the colors out. But I know a lot about this now, so I could probably figure it out. (Jake: when I was answering this question, I thought of basically dividing the xyY plane into regions, and using dynamic programming to figure out how pivoting regions around the confusion point might give me better color differentiation.) But I haven’t yet.