
THE README PODCAST // EPISODE 32

(De)coding conventions

The evolution of TypeScript and the future of coding conventions, AI’s role in improving accessibility, and practical advice on encouraging non-code contributions.


The ReadME Project amplifies the voices of the open source community: the maintainers, developers, and teams whose contributions move the world forward every day.

The ReadME Project // @GitHub

Programming languages are always in flux, and so is the way we use them. In this episode, we dive into the rise of TypeScript, with The ReadME Project’s Senior Editor Mike Melanson outlining its history and evolution. Hosts Neha Batra and Martin Woodward discuss the pros and cons of static typing, and Jordan Harband from TC39 shares his views on the benefits and limitations of TypeScript. We also hear from Aaron Gustafson on AI’s potential to enhance accessibility and the projects leading the charge. And Kingsley Mkpandiok from the CHAOSS Project responds to an #AskRMP submission with tips on encouraging non-code contributions within open source projects.

Here’s what’s in store for this episode:

  • 00:00 - The hosts discuss the challenges of establishing web standards in open source communities when new technologies emerge. 

  • 02:38 - First Commit: The transformation of the world of stock trading from a chaotic, bustling floor to an automated and computer-driven environment. Our hosts highlight the role of open source, particularly Linux, in powering high-speed trading systems and enabling advancements in performance and speed.

  • 05:40 - Feature release: The ReadME Project’s Senior Editor, Mike Melanson, discusses the rise of TypeScript and the pros and cons of statically typed languages.

  • 18:24 - The interview: Aaron Gustafson joins the hosts to discuss the role of AI in improving accessibility. He highlights initiatives such as Microsoft's AI for Accessibility Grant Program, which invests in research and startups to drive innovation in accessibility.

  • 36:10 - AskRMP: Kingsley Mkpandiok answers a listener’s question on encouraging non-code contributions. The key? Communicate that everyone’s skills matter.

Looking for more stories and advice from the open source community? Read on to learn more from the authors and experts featured in this episode.

Special thanks to Jordan Harband for sharing his insights on TypeScript, Aaron Gustafson for outlining the role that AI will play in accelerating accessibility, and Kingsley Mkpandiok for answering a listener question about encouraging non-code contributions.  

Check out The ReadME Project for more episodes, as well as featured articles, developer stories, helpful guides, and much more! Send your feedback, questions, and ideas to thereadmeproject@github.com.


Martin: I say, "GNU."

Neha: The animal is gnu, but we also just... I think it's G-NU because we've decided to go with G-NU. It's like the Jif versus GIF thing, so yeah. Okay, cool.

Martin: Okay, cool. I'm getting an official pronunciation. Give me one second.

Neha: Yeah.

Martin: GNU, it is GNU.

Neha: I like to imagine it's coming to your brain. Okay, cool. GNU.

Martin: GNU is the official pronunciation. 

This is The ReadME Podcast, a show dedicated to the topics, trends, stories, and culture in and around the developer community on GitHub. I'm Martin Woodward from the GitHub Developer Relations team.

Neha: And I'm Neha Batra from GitHub's Core Productivity team. And Martin, today's words of the day are standardization and automation.

Martin: Awesome. Well, I can spell one of those two words, so this is exciting.

Neha: I didn't expect you to say that.

Martin: That's okay.

Neha: I think that goes for both of us.

Martin: Yeah.

Neha: Regardless, we're going to be taking a closer look at web standards and conventions, and how we come to a consensus around those when a new technology emerges, especially in open source where there are no limits to who your community is, which adds a special layer of complexity around making decisions. And one of the things that we're going to be talking today about is TypeScript, which, Martin, I know you have some history with.

Martin: Well, yeah, it was a super small part. The reason Microsoft have an account on GitHub, which I created, was actually to open source the TypeScript project. And I've got this vivid memory of Anders Hejlsberg and Amanda Silver coming to me and saying, "Hey, we're going to release this new JavaScripty thing. It's going to be amazing. The world's going to adopt it." And I remember thinking, "Yeah, right. Okay, good luck with that."

Well, just goes to show, you should never bet against Anders Hejlsberg or Amanda Silver, that's for sure.

Neha: I can't believe you created the Microsoft GitHub account. It's like a flex, which I think is totally valid, and I think what's really interesting about that is that things change over time. Now, TypeScript has become pretty ubiquitous and it's the fourth-most popular language on GitHub, but there are still some limitations. We're going to be talking to GitHub's Mike Melanson about the rise of static type checkers, the pros and cons, and how the standards could change soon.

Martin: Yeah, and we'll also be diving back into how technology is changing, especially for developers with disabilities. This time, AI is playing a role. We'll talk to Aaron Gustafson from Microsoft about all that.

Neha: Plus, as always, we'll hear about what is going on at The ReadME Project and get some advice in #AskRMP, but first, First Commit.

Martin: Picture, if you will, with me, Neha, the world of stock traders. Now, what that probably conjures up is the view of the New York Stock Exchange. People crowded, screaming at each other, wearing blazers, and maybe it's like the futures exchange with Eddie Murphy in Trading Places or something like that.

Neha: Honestly, in today's day and age, I absolutely cannot imagine it because it sounds nothing but stressful to me. But luckily, in actuality, today it sounds a lot more like this.

But that makes sense, because over time, the world of financial trading has increasingly left its pocket calculators behind and begun relying more and more on computers and other technology. So as the algorithms advanced, the trading floor got quieter.

Martin: Yeah, and actually today, the world of trading is so automated and fast because of the use of powerful computers. Arbitrage opportunities are found and executed in the milliseconds and fractions of milliseconds between trades. So, you've guessed it: this requires some pretty powerful computers to take advantage of it.

Neha: And in the early 2000s, milliseconds really started to matter. Michael Lewis wrote about this in his book "Flash Boys": organizations were increasingly getting the upper hand in this battle against time with better software and even better physical infrastructure that was slightly closer to the Stock Exchange itself. And we're talking a few feet of cable making the difference.

Martin: And the ability of these firms to conduct superfast trades, they owe it, at least in part, to open source. In that same time period, Linux was being quickly adopted by many in the financial sector to power their high-speed trading systems. Because of the system's ability to send messages really quickly, some have even argued that high-frequency trading wouldn't exist without Linux and without open source.

Neha: Even today, the New York Stock Exchange, which is the biggest stock market in the world, runs on GNU/Linux. And Linux was also beneficial because it allowed these companies to continually improve performance and speed, giving them the upper hand and making them billions.

Martin: Of course, there are some downsides to the dependence on fast-moving automated systems like this. We've seen a few times where a small error can actually cause a whole market to dip. Back in 2012, the Knight Capital Group lost around $440 million during a mass sell-off of stocks. That happened by accident because of a proprietary software glitch that got pushed to production.

Neha: And while this kind of high-frequency trading has become the standard in markets, people continue to innovate and the technology continues to develop. Today, algorithmic trading accounts for around two-thirds of US equity trading, for example. And with the growth of AI, the speed and power of some of this kind of financial trading will only get bigger.

Martin: So Neha, I mentioned it in the intro, but I've been around TypeScript for a little while now, and it's become this really important tool that we all depend on, not just for the systems that we write here at GitHub, but also for a lot of the systems that we're using on the internet today.

Neha: Yeah, I feel like TypeScript is something we use a lot at GitHub, and I actually started playing around with it as a developer before I joined GitHub and I really liked it because it brought some order into how we were working and making sure that we were speaking about the same things in the same type of way, especially because we weren't all working as closely together. It allowed us to hand things off to each other in a very effective way.

Martin: Yeah. It really excels when you're working in a large team. Some of those static errors kind of show up a lot more when you’re collaborating with interfaces somebody else wrote.

We're going to dive a bit more into how TypeScript became so popular and where it goes next. Mike Melanson is joining us. He's the senior editor of The ReadME Project and he's back with us now. Hey, Mike.

Mike Melanson: Hey, how's it going, Martin?

Martin: It's great to have you here. A lot of people have kind of used TypeScript now. In the last Octoverse Report, it's like the fourth-most popular language on GitHub now, but that's come a long way in a very short period of time. Do you want to give a quick history about some of the background and where this came from?

Mike: There have been some swings in paradigms in programming languages over the years, and if you go back to the very beginning, you have Fortran and COBOL, some of the very first languages, and they came out and they were statically typed. That meant that those languages checked the types, like Boolean, string, integer. They checked that the types matched the operations being run on them at compile time. That was the dominant thing for a long time because it helped companies find certain types of errors and do certain things.

And then the web came about in the '90s and we had JavaScript show up. We had Python, PHP, Perl, Ruby, all these different languages, all dynamic, where the checking happened at runtime. Also, the time of Agile happened. And so like Facebook said at the time, "Move fast and break things," these were the languages where you could move fast and you might break things, but you were moving fast at least, right?

Neha: For sure, yeah.

Mike: Yeah. But then companies like Google tried to make Maps and Docs in JavaScript, and Microsoft realized early on that they were going to have to bring the Microsoft Office suite to the web. And when they realized that, and that they would have to use JavaScript to do it because that was the language of the web by then, they said basically, "No way. We're going to have to find a better way to do this," and they built TypeScript.
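To make the compile-time versus runtime distinction concrete, here's a minimal TypeScript sketch; the function and values are illustrative, not from the episode:

```typescript
// Statically typed: the compiler rejects a bad call before the program runs.
function area(width: number, height: number): number {
  return width * height;
}

// area("10", 5); // compile-time error: Argument of type 'string' is not
//                // assignable to parameter of type 'number'.

// Dynamically typed JavaScript would accept the equivalent call and only
// misbehave at runtime: "10" * 5 silently coerces to 50, while "10" + 5
// concatenates to "105".
console.log(area(10, 5)); // 50
```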

Martin: I think that's always amazing to me, the computer science that happens to solve internal engineering problems at companies. It's a lot of the reason why we have things like Codespaces and some of the things on GitHub: to solve our own problems.

Neha, you were actually using this back in the day. Did you find that it was helping you resolve a whole set of issues when you were coding in TypeScript versus just raw JavaScript?

Neha: I did. So to put myself out there a bit: when I was working at Pivotal, we were working with different companies and trying to basically make sure that on day two and three we were picking the right languages and frameworks that would work with these teams. We were pairing together, but we would just merge straight to main. We wouldn't actually have to do a code review or any of those formal processes.

So when it comes to making sure that we were all working in a similar manner and able to understand each other's work, as you're picking up someone else's work that just got merged in maybe an hour before, either you had to have really strong similarities in coding styles, or, with TypeScript and other things, you could keep to the languages of preference and familiarity but scale a lot more. Because now, all of a sudden, we could see what the types were, we could check our work a little bit better, and we weren't spending as much time debugging and QAing the system, because it was right from the beginning.

And I think there's something really interesting here. As Mike said, you have to move fast and break things. We wanted progress and then we wanted scale. And if you want scale, you need some constraints and order to make that scale happen. The biggest criticism of JavaScript was not having types, not being able to scale beyond a certain level, and needing to adopt something else, but now you didn't have to.

Mike: And one of the things about TypeScript is that it helped do that, but without going all the way back to the Java and the C++. You didn't have to write all that boilerplate code and the big interfaces, or think just in abstractions. You could still have that sort of self-documenting feature that you're essentially talking about, and all those other things, but not go back to what some people at that time saw as the dark ages.
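As a rough illustration of that self-documenting quality (the `Order` shape below is hypothetical, not from the episode): a TypeScript interface records the contract without class hierarchies, because the type system is structural:

```typescript
// Any object with this shape satisfies the interface; no class
// declarations or inheritance required.
interface Order {
  id: string;
  items: { sku: string; quantity: number }[];
}

// The signature documents exactly what the function needs and returns.
function totalItems(order: Order): number {
  return order.items.reduce((sum, item) => sum + item.quantity, 0);
}

console.log(totalItems({ id: "a1", items: [{ sku: "x", quantity: 2 }] })); // 2
```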

Martin: Well, I'd say some people were quite skeptical about this at the time. There was a reason why we created dynamically typed languages, or dynamic languages: to be fast. So not everybody can be a fan of doing it this way, can they?

Mike: No, not necessarily. I talked to Jordan Harband about this, and he's a member of TC39, which is the committee that determines ECMAScript, the standard behind JavaScript. And he definitely acknowledges the benefits of type systems, but he also says that types can essentially become a crutch that you rely on in place of doing proper testing on your code.

Martin: Yeah, just because you are type-safe doesn't mean you are safe from bugs. I have plenty of bugs in my TypeScript code.

Mike: Yeah, for sure. Types help you find them during development. You have a tighter feedback loop often, but things still slip past. Here are some of the downsides Jordan told me about. 

Jordan Harband: The cons of a type system, I think, are less objective and less broadly understood or agreed upon. TypeScript is not a superset of JavaScript. It does not have the capability to fully represent JavaScript semantics. You can have programs that type check that still throw type errors, and you can have programs that TypeScript complains about that do not in fact throw type errors. These are edge cases, they're rare, but it happens.

Normally when presented with these sorts of caveats, the response is, "Well, it's good enough for my use case," and that's a valid choice one can make, but it's still a con to consider. You're adding complexity to your code base, you're adding more requirements on your developers to understand the code, to maintain it. And when the errors are confusing or incorrect, it can take a lot of extra time and more importantly, mental energy and focus to stop what you're doing, figure out what the problem is, and then keep moving.

Another big con though is that a lot of people are under the impression that having these static types means that you're safe. They use the phrase "type-safe," but you're not actually safe at all. TypeScript provides zero runtime guarantees. All it's doing is kind of giving you hints. I think it's really important that folks understand that there's basically no bugs you'll catch with TypeScript that you couldn't have caught with tests and shouldn't have caught with tests.
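A minimal sketch of the kind of edge cases Jordan describes, assuming TypeScript's default compiler settings (the names are illustrative):

```typescript
// This type checks, yet throws at runtime: by default, indexing into an
// array is typed as `number` even when the array is empty.
const scores: number[] = [];
const first: number = scores[0]; // typed as number, actually undefined
console.log(first.toFixed(2));   // runtime TypeError

// And the reverse: code TypeScript complains about that would run fine.
const greeting: unknown = "hello";
// console.log(greeting.length); // compile-time error, even though the
//                               // value really is a string with a length
```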

Neha: I think that some of those points are really fair. I think that whenever you try to solve for some areas, you're making compromises in others.

When it comes to how the general community has responded, are there any other criticisms that are prevalent and have you seen the TypeScript community respond in any way?

Mike: Yeah. I mean, I would say overall, people love TypeScript, right? It tops the charts on various surveys, but at the same time, there's one part that nobody really likes, and that's the build step. Gil Tayar was the original author of the Type Annotations Proposal for JavaScript, and that proposal is to bring, essentially, types as comments to JavaScript, with the idea that you wouldn't have the transpilation step that TypeScript has. It's definitely all about the build step.

Recently, a high-profile thing happened: Svelte framework creator Rich Harris announced on Twitter that he was forgoing TypeScript to use JSDoc instead. And I mean, there's another instance of pros and cons. You can skip the transpilation step there, but the con is that you have this sort of disjointed, separate experience now, where your types are held in this other doc or this other... So there's always pros and cons.
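For contrast, here's roughly what the JSDoc approach looks like: plain JavaScript whose types live in comments, so it runs as-is with no transpilation, while TypeScript's compiler can still check it (for example, with "checkJs" enabled). The function is hypothetical:

```javascript
/**
 * @param {string} name
 * @param {number} count
 * @returns {string[]}
 */
function repeatName(name, count) {
  return Array.from({ length: count }, () => name);
}

console.log(repeatName("Ada", 2)); // ["Ada", "Ada"]
```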

The Type Annotations Proposal looks to get rid of that, and it really tries to find a way to make JavaScript itself more of a gradually typed language rather than having TypeScript on top. And it also actually wants to make it so that you can use various type systems on top of JavaScript. It's supposed to be just sort of an in-between layer where you could choose your type system to be TypeScript or you could choose it to be Flow from Facebook or you could choose Hegel, I think was another one.
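Roughly, per the proposal's published examples, annotated code like the sketch below could one day execute directly in a JavaScript engine, with the annotations parsed but ignored at runtime, so no build step would be needed:

```typescript
// Under the Type Annotations proposal, an engine would treat the
// annotations below as comments and run the file as ordinary JavaScript.
function stringRepeat(text: string, count: number): string {
  return text.repeat(count);
}

console.log(stringRepeat("ha", 3)); // "hahaha"
```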

Martin: And that's definitely what we see, isn't it? We're seeing not just an increasing move to kind of typing in JavaScript, but also the flexibility of dynamic languages with the ability to do typing when you need it in other languages as well.

Mike: Yeah. A thing that's been happening ever since TypeScript showed up in 2012 has been this move towards a gradual type system. I think the phrase first came out in 2006 in an academic research paper, but gradual typing basically is finding the best of both worlds, and that's really the story of TypeScript. 

It came out first, but it was really just sort of the flame that emerged from the embers below. In 2014, we saw Flow from Facebook. Shortly after, Python and PHP both added type hints and type annotations. Ruby recently added its own native type checker, but it's had Sorbet for years now. Even Elixir, which comes straight from Erlang, just had a thing on Hacker News the other day where they are looking at an experimental type system. So the move has been toward this gradual typing system, but even then, you're still not going to get everybody to agree.
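A small sketch of what gradual adoption looks like in TypeScript itself (the function is hypothetical): code can start effectively untyped and tighten over time, file by file:

```typescript
// Step 1: start loose. `any` opts this boundary out of checking, so
// existing JavaScript call sites keep working unchanged.
function parsePrice(raw: any) {
  return Number(String(raw).replace("$", ""));
}

// Step 2: annotate the boundary when ready. The checker now flags bad
// call sites without the rest of the codebase having to change.
function parsePriceTyped(raw: string): number {
  return Number(raw.replace("$", ""));
}

console.log(parsePrice("$4.99"));      // 4.99
console.log(parsePriceTyped("$4.99")); // 4.99
```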

Neha: I think that's the beauty of these things: as these languages have evolved... before, they had to be super opinionated, and over time, we've been able to use software to let people pick what they want to optimize for. And that's where I think we want to be. Some people are optimizing for enabling people to go as quickly as possible, other people are optimizing for scale, some people are optimizing for the build step or the compile step, and now there are tools, not for every single situation, but as we discover more situations and more people who need them, the software can meet their needs. And I think that's a beautiful part of the evolution, especially on the front end.

Mike: Yeah. I talked to Amanda Silver, who was involved with TypeScript from early on, and she talked about the role that AI is likely going to play in all of this. Types can be hard to understand. That's one of the things people point to about them. You get a type error and you're like, "I really don't know what that means." Especially if you're not an experienced programmer, it can be confusing. And she said that the role of AI in the near future could be smoothing out that experience, where not only gradually typed systems but various languages are going to be able to get those static typing benefits without the drawbacks.

Martin: Yeah, and that's fascinating because Amanda's in charge of a bunch of different languages, including C# and things, and both C# as well as Java, these very strongly typed languages, have actually been adding features to allow them to be more dynamic. So it's amazing how the dynamic languages are getting more type-safe and the type-safe languages are getting more dynamic, and we're kind of meeting in the middle, allowing people, just like Neha said, to pick the right tool for the job at the right time.

Mike: Yeah, precisely. It really seems like we're realizing we want the best of both worlds and the languages have been following that towards the middle a little bit.

Neha: Well, I think that's a great ending though, actually, seeing how these languages have evolved, meeting in the middle, and making sure that people have the tools that they need to optimize for what they need.

Mike Melanson, thank you so much.

Mike: Thanks as always.

Martin: Oh, hey Mike, before you run, let us know what else is going on at the ReadME Project.

Mike: This month, we have Kyler Middleton sharing how she went from farm life to a career in DevOps and outlining the overlooked value of knowledge sharing in tech. Also, Ruth Ikegah is back with a guide that simplifies making your first open source contribution. And Tramale Turner’s guide helps you grasp ‘adaptive leadership’, where your leadership style is responsive to the needs of your teams and organization. You can find all this and more on github.com/readme.

Neha: So Martin, we can't really escape the conversation around AI right now. There are a lot of worries about what it's going to mean for our jobs and for society, but there are obvious benefits too. We're going to check back on a topic we covered a few months back, which is accessibility, because it's one of the places that AI is poised to make a huge difference. It's going to help create new systems, standards, and ways of working that not only benefit those who need accessible services, but also the rest of the open source community.

And to talk about that today, we're joined by Aaron Gustafson, Principal Accessibility Innovation Strategist at Microsoft. Hey, Aaron.

Aaron Gustafson: Hey, how's it going?

Martin: It's great to have you here. I think that's a fancy title you've got. So I think it'd be helpful to start by explaining what your job actually entails. What brought you into this role?

Aaron: Gosh, that's a tough one because I've sort of had a long and winding journey in terms of technologies, but I recently joined this team, I guess recently to me, about a year ago, a little over a year. And I came into this role from a history in accessibility in the web space. And so a lot of my work has been in the space of accessibility, progressive enhancement, that sort of work, trying to ensure that our web products can reach as many people as possible.

And then this opportunity came up on the Accessibility Innovation team at Microsoft and they were looking for somebody to steer the direction of the AI for Accessibility Grant Program, which is a grant program that's just completed its fifth year, where we've been doing a lot of targeted investments in research, in startups, and in organizations that are using AI and doing other things to really accelerate accessibility innovation across the globe.

Neha: I love that. There was a bit of a tone of surprise when I was introducing you about this connection between AI and accessibility. I just don't think it's a natural connection that people might be thinking about. So I'm curious for you, how do AI and accessibility connect?

Aaron: Yeah, so I would say there's a lot of disparate projects that are happening in this space. The big projects like GitHub Copilot and ChatGPT and stuff like that kind of capture a lot of the public's attention, but there are lots of really interesting things that are going on and have been going on for a while that are using machine learning in different ways to address real needs.

So an example from a past grantee from the AI for Accessibility Grant Program is Mentra, and they are a job placement platform and sort of ecosystem for people who are neurodiverse and their whole idea is to try and make the recruitment process better on both sides for both people who are neurodiverse and the people who want to hire neurodiverse employees for all the benefits that they bring to a place.

And what Mentra does is they actually use their AI technology to match job roles that are open to those folks and then they promote the various job seekers to the potential employers and say, "Here is a group of people that meet your requirements. They align with what it is that you can offer in terms of accommodations, in terms of work environment, that sort of stuff. And here's how much of a match they are," and then it's on the employer actually to reach out to those individuals in order to take the first step. And so it puts less of an emotional and mental overhead on the job seekers to find these employers, and you know you're getting an employer that actually understands and embraces neurodiversity among their workforce.

That's one example. There are other really interesting projects that are out there, like the Speech Accessibility Project, which is actually a consortium of different tech companies as well as the University of Illinois, that is putting together a collection of diverse voices in order to be able to create voice assistants that are better able to work with people with atypical speech. So they're actually going out and recruiting people with Parkinson's, that's the current group that they're working on recruiting, and working with people who have different etiologies that they're looking to build a better data set around, which is really cool stuff.

I just saw a post from my friend Sharon Steed on LinkedIn, I think yesterday or the day before, where she was talking about not being able to use Siri because she has a stutter and that's what we want to address. And there are lots of places where we work with speech-to-text or we work with image recognition, we work with all of these different areas that we want to make sure that there is a representative dataset under the hood so that anyone can interact with a voice assistant or those sorts of things and we'll be able to empower more people through those features.

Martin: Yeah. We had a talk a few weeks ago with Becky Tyler, who uses text-to-speech as a way of communicating with the world, and she's a young teenage developer in Scotland. And for me, it was sort of noticeable that she'd chosen a voice that didn't have a Scottish accent because it was a voice that was young. So being able to get these accents into these voices, being able to have globally diverse ways of communicating, is important.

And you mentioned in our introduction that actually making these developments in AI accessibility more globally available has been an important part of your work. Can you talk a bit more about that? What does that mean? Is it improving affordability, or is it increasing applicability to people who are outside of the US? What is it?

Aaron: I think there's a bunch of different things that kind of come together in that. I think first of all, yes, availability. And when we talk about availability globally, we're talking about cost, we're talking about necessary processing speed, like what is the AI running on in terms of a device? If it's running on your local device, can that be supported by devices that aren't the latest and greatest shiny devices that people in the tech industry have in our pockets? How can we ensure that as many people as possible are able to access these various technologies?

So another example in this vein is a project called IWill, which is looking to address the needs of people seeking mental health treatment in India. I'm not going to remember the exact stat, but I think it's something like there are 0.4 mental health professionals per 100,000 people in India, something like that.

Neha: Oh my God.

Aaron: Certainly not anywhere near the scale that is needed. And so another one of the projects that we funded was this IWill project where they are training a cognitive behavioral therapy, or CBT, chatbot on actual cognitive behavioral therapy sessions done in Hindi. And they're training it end-to-end in Hindi so that they're able to roll this out for a Hindi-speaking audience in order to be able to provide that kind of first tier of mental health support for people who need it. And they have partnerships then on the ground with resource centers throughout India where people who don't have a device, who don't have network connectivity at home and stuff like that, can actually come and have a private session using this tool in order to be able to have the mental health support that is just not available.

And so looking for projects like that that can do really amazing things is really what gets me excited about coming into the office, well, virtual office, every day.

Neha: I think this is a really interesting example because the beauty of technology is that you can see a problem and you can try to solve it with technology instead of having to have people or other types of funding to scale to solve that problem. And, at the same time, I'm sure I'm not the only one that initially when I hear a project like IWill, I get a little worried too, and I'm like, "Okay, we're applying AI to this kind of thing."

Aaron: For sure.

Neha: So I think there's some fear that's associated with AI, and I feel like this example really encapsulates that, where there are huge benefits and fears at the same time. Do you think those fears are valid, and is it possible that AI could actually create more challenges around accessibility?

Aaron: Absolutely. I don't wear rose-colored glasses when it comes to AI. I do see lots of potential opportunity for harms. And so a lot of what our team is trying to look at is what are the potential harms out there? How can we mitigate them?

In the case of the IWill Project, it was really important to us that here was a project that was looking at cognitive behavioral therapy and actually addressing it in a Hindi-first way. They're not doing a leap to translation to English, where there's an existing data set of CBT therapy chatbot-type stuff, and then having to bridge back to Hindi, where not only are you introducing fragility in that translation piece in a really sensitive area of mental health, but there's also the potential for an imposition of a Western perspective into a CBT experience.

Neha: 100%.

Aaron: But yeah, I think back to your question about the potential for harm, there is absolutely the potential for harm in all of the stuff, and a lot of it honestly comes down to the training data for the models themselves. If you're working with a foundation model and then combining that with grounding data, what's happening in the grounding data as well, anywhere there is bias in that data is going to work its way through, unless you have a lot of gates along the way to keep that and filters and stuff like that to keep that from happening.

So if the data that it's trained on is using ableist language, then chances are the model is going to have an ableist perspective, or other problematic perspectives. It may not be distinctly casting people with disabilities or with particular disabilities in a bad light or saying that they're not capable of something or something like that, it might go to the other area of like, "Oh, you're so inspirational," and that sort of stuff, which is equally problematic, but not in quite the same way.

So we need to be cognizant of what are the potential harms. We need to be red teaming these systems and seeing are we getting problematic responses from generative AI, certainly, in terms of whether that's text generation through large language models or whether that's images that are being created that are potentially harmful in terms of their representation. And the same goes beyond accessibility to representation and diversity overall, right?

Neha: Yeah.

Martin: No, no, I think the classic example there was when we started introducing machine learning and data analysis into our watches and things to make us more healthy. We then have a watch that somebody in a wheelchair puts on, and when it was initially launched, it told them to stand up every five minutes. With things like this, we risk amplifying our own biases because of those training sets.

Aaron: Absolutely.

Martin: The classic example is when I look at some of these AI portrait studio picture things, I always have this wacky American-style smile that I'm not used to because I'm European and I have bad dentistry.

Aaron, we talked a lot when we covered accessibility in previous episodes about increasing accessibility for the web and increasing accessibility in the tools that we use. It doesn't just make the products better for people with disabilities; it makes the products better for people like us who are temporarily not disabled. And AI tools as well, are they also going to make for a better ecosystem for everybody, do you think?

Aaron: Absolutely. I view it in a very similar way. In the accessibility community, we often use metaphors like the “curb cut” and stuff like that to help people to understand that a ramp transitioning you from the raised sidewalk to the street level to be able to cross the street is useful for somebody who is navigating that space in a wheelchair, but it's also useful to people pushing a baby stroller or the delivery driver with their cart of boxes and stuff like that. And in the same way, when we build more robust systems that are able to be used by a broader selection of people, that is going to automatically create more opportunities for people who are temporarily not disabled, as you said.

Martin: We've been using Copilot a lot when we create our own documentation internally, and actually, what we've found is it's helping us in creating alt tags for images and things. We find it suggesting the alt tag and the alt text for us, and then it's sort of prompting people and reminding people to actually complete those things. So I'm already starting to see some benefits there, but we also need to add those biases into the system as we're building things, to gently prompt things along and make sure we are encouraging those behaviors.

Aaron: Yeah. The image description stuff is interesting. Sometimes the models are pretty dead on in what they're providing for that, but often they don't take into account the context that the image is in. So you might be having a piece that is not intentionally about octopuses, but you've got a photo of an octopus in there because it's a metaphor for something that you're talking about, but the alt text that it'll prompt you with is, "An octopus on the sea floor" or something like that when you might want that to actually be like, "An octopus is a representation of X, Y or Z."

And I almost feel like in some ways, bad alt text generation in those contexts is sort of a needling of you to be like, "Oh, that's really bad. I need to replace that alt text with something that actually makes sense and is going to be more usable." And as a forcing function, yes, the ultimate alt text may not be great if you're just taking the image description that the AI is suggesting, but if it prompts you to then actually go and tune that, that's a good thing. That's a good forcing function.

Neha: Yeah. I think that's what I was also thinking about when I was asking you earlier, "Oh, what about the fear and what about these potential downsides?" The flip side of that coin is that it can help us aspire to be better. And I do think that we started asking this a little bit earlier and in previous episodes: are our jobs at risk? It really depends on how you embrace AI, if you use it as something assistive, a copilot, something that nudges you and inspires you to be better. If we design technology to make those suggestions, you could either take it as, "Hey, this is the alt text, and I guess the AI's not good enough," or you could say, "Hey, I was not going to think about putting alt text in. I'm really glad it reminded me now, instead of having to go through code review and finding it out later." So it really depends on how we embrace it and incorporate it.

Aaron: Totally. I mean, I think a good sort of related version of that is: could GitHub Copilot replace a developer? Possibly. If that developer is literally just going to Stack Overflow, pulling whatever the first solution is that they find, and dropping it in there without any thought, yes, that developer is probably replaceable by AI. But most developers aren't that, right? We look to things for inspiration and we massage them to be whatever it is that we need in the context that we're working in.

I saw someone describe an LLM as a great improv partner, and having that sort of relationship with a copilot, with an assistant, it's sort of pair programming in a way, but being able to do it on your own. And so I do think there's a lot of potential in that for doing those sorts of things.

Martin: Yeah. And I think that's what makes me most excited. People from the outside underestimate the amount of creativity that goes into our roles, but actually, it's hugely creative to be an engineer, to be a developer, to solve problems. And if we can use AI to increase accessibility for people who currently can't use keyboards, who struggle with that, then if you just think about it macroeconomically, the amount of creativity that we are now bringing into the workforce just goes to show why we are doing this and some of the potential for it.

Aaron: Yeah. And in some ways, I feel like the automation that becomes possible through AI is sort of like what a lot of developers have done by building their own macros and things like that, things that automated away mundane tasks for them in order to streamline their day. It's effectively taking that sort of concept but making it available to anyone, so that they can basically try to automate away the mundane tasks in order to be able to focus on the things that actually require a lot of focus, attention, care, intentionality, et cetera.

Neha: Aaron Gustafson, Principal Accessibility Innovation Strategist at Microsoft, thank you so much for talking with us. It was fascinating to talk through this. And if you want to hear more from Aaron, you can check out his developer story at github.com/readme.

Martin: And now for #AskRMP, the place in the show where we grab a listener question from you and get an expert to give us fair advice.

This month, Imani from Rwanda asks, "How do I encourage people to add non-code contributions in particular to my project?" And to answer that, we have Kingsley Mkpandiok, a user experience and brand identity designer and the design maintainer at the CHAOSS Project based in Nigeria.

Kingsley Mkpandiok: We're talking about documentation, design, community management, and it could be language translation. So these are areas where non-code contributors can actually make open source contributions to their favorite projects.

So as the design maintainer in the community I contribute to, I've personally onboarded a lot of designers to contribute to the project. And talking about UX design, of course, we also have brand design, fixing up promotional designs for the brand and all that. These still fall under non-code contributions.

Now, for documentation, we didn't really have detailed documentation to guide newbies just joining the community and help them better understand what we are doing. So personally, what I did was open a document, a Google Doc to be specific, where I shared my own ideas on how I understood the project, and I also made the document accessible to other existing contributors. And most of them also started sharing their own insights into different aspects of the project they understood better.

So eventually, we now have this document that newbies joining the project can go through to get a proper understanding of what the project is about, making the whole onboarding experience smooth. So for me, I think this is actually one of the ways I got people involved in documentation. And also talking about design, I know lots of people also started contributing to a style guide and a design system at the CHAOSS Project. These are just some of the ways I've been able to influence other non-code contributors in my communities to start making these contributions.

Most people feel like, "I don't really have this super technical skill to start making contributions," or some people feel their contributions are small and won't be noticeable because, "I'm not changing code, I'm not doing that." So I think it's important to understand that when we use the word inclusivity in the open source ecosystem, it cuts across any tech skill that can actually make a project better. So irrespective of what you feel, whatever skill you think you have, even if you feel like, "Oh, this is not really relevant," I think it's really, really important.

And particularly, for someone like me who loves advocating for designers in the open source ecosystem, it's also important for a lot of designers to understand that for many open source projects out there, the experiences are not really excellent, and we need more and more designers in the open source ecosystem to improve the experiences of these products, which will actually encourage more people to start using open source products.

So these are some of my takes on how non-code contributors should really get engaged and not worry so much about, "Oh, my experience or my skill is too small, or it's not a super technical skill and won't really count." Your skill counts.

Neha: Do you have a burning question about open source, software development, or GitHub? Share it on social using the hashtag #AskRMP, that's A-S-K-R-M-P, and it may be answered in our next episode.

That's it for this month's episode of The ReadME Podcast. Thanks so much to this month's guests, Mike Melanson, Aaron Gustafson, Jordan Harband, and Kingsley Mkpandiok. And thanks to you for listening. Join us each month for a new episode, and if you're a fan of the show, you can find more episodes wherever you get your podcasts. Make sure to subscribe, rate and review, or drop us a note at thereadmeproject@github.com. You can also learn more about what we do at GitHub by heading to github.com/readme.

CREDITS: GitHub's The ReadME Podcast is hosted by Neha Batra and Martin Woodward. Stories for the episode were reported by Senior Editors, Klint Finley and Mike Melanson. Audio production and editing by Reasonable Volume. Original theme music composed by Xander Singh. Executive producers for The ReadME Project and The ReadME Podcast are Robb Mapp, Melissa Biser, and Virginia Bryant. Our staff includes Stephanie Moorhead, Kevin Sundstrom, and Grace Beatty.

Please visit github.com/readme for more community-driven articles and stories. Join us again next month. Let's build from here.

Neha: And one of the things that we're going to be talking today about is TypeScript, which Martin, I know you have some history with.

Martin: Why, yes I do. Flex, flex.

Neha: Flex, flex.

Martin: It ran on my Amex for ages, the Microsoft account. It was amazing.

Neha: Yeah, see, that's the stuff that you're like, "Oh yeah, I created this." And they're like, "Wow, you're so cool." And you're like, "I paid a lot for that, but I forgot to expense."

Martin: Yeah, yeah, yeah.

Meet the hosts

Neha Batra

Growing up in South Florida, Neha Batra has always loved building things. She dug into robotics in high school and earned a mechanical engineering degree, then jumped into a role as an energy consultant—but wanted a faster loop between ideation and rolling out new creations. Accordingly, she taught herself to program (through free online courses and through Recurse Center) and worked as a software engineer at several companies, including Pivotal Labs and Rent the Runway. She also volunteered to make the world of open source more inclusive for marginalized genders on the board of Write/Speak/Code. Neha now lives in San Francisco, where she's a Senior Engineering Director at GitHub designing products to improve the world of OSS. She's also a foodie who's into planning trips and collecting national park magnets.

Martin Woodward

As the Vice President of Developer Relations at GitHub, Martin helps developers and open source communities create delightful things. He originally came from the Java world, but after his small five-person startup was acquired by Microsoft in 2009, he helped build Microsoft's tooling for DevOps teams and advised numerous engineering groups across the business on modernising their engineering practices, as well as on learning how to work as part of the open source community. He was the original creator of the Microsoft org on GitHub and helped set up the .NET Foundation, bringing in other companies like Amazon, Google, Samsung, and Red Hat to help drive the future direction of the open source platform. Martin joins the podcast from a field in the middle of rural Northern Ireland and is never happier than when he's out walking, kayaking, or sitting with a soldering iron in hand, working on some overly complicated electronics-based solution to a problem his family didn't even know they had.
