
Find edge case errors in your code base

GitHub Copilot offers some surprising benefits for solving particularly stubborn problems.

Artwork: Susan Haejin Lee


Claudio Wunder // Senior Software Engineer, HubSpot

The ReadME Project amplifies the voices of the open source community: the maintainers, developers, and teams whose contributions move the world forward every day.

While GitHub Copilot allows developers to spend more time building innovative software and less time creating boilerplate and repetitive code patterns, its abilities go far beyond efficiency. In this video guide, I outline a recent experience with GitHub Copilot that changed the way I think about finding edge case errors.

I was tracing an elusive error in a library used in nearly every aspect of HubSpot's business. Despite my best efforts and those of my team, we couldn't find the error’s origin. It was a potentially huge problem because the library in question has a direct effect on a core part of HubSpot's offering: the ability to assess user behavior. I was using GitHub Copilot to work on the library, just as I do with my contributions to projects like Node.js and GNOME, when I received a suggestion that just didn't add up. It wasn't that GitHub Copilot was wrong; rather, this odd bit of code shined a light on the exact bug that was vexing me.


In this video, you will learn:

1. How HubSpot works to track user behavior

2. How GitHub Copilot can help find, test, and debug issues in your code



Video transcript:

Hey there, I’m very excited to have you here. My name is Claudio and I’m a platform engineer at HubSpot. My job is to maintain a sustainable tracking and experimentation platform that serves the hundreds of different engineering teams we have at HubSpot. I’m also a collaborator at the Node.js project, and a member of the GNOME Foundation, where I mainly contribute during my free time. 

Today, I’m going to talk about a developer story of how GitHub Copilot helped me identify a critical bug in our codebase—and how it can do the same for you and save you time and headaches. 

So what is Copilot? You could say it’s an extension that suggests code in real time within your editor. But, for me, GitHub Copilot is an assistant that helps me write better code day to day, as it’s powered by OpenAI and millions of lines of code from open source repositories.

Now, let’s talk about the story of when Copilot identified a dormant bug in our code base, and what happened next. 

But before we jump into what exactly Copilot identified, I should give a little bit of context about the library it impacted.

Without further ado, this is Usage Tracker. Usage Tracker is a library responsible for tracking user behavior. It’s an in-house library along the lines of Segment or Sentry, so to speak: it monitors user behavior, identifies interactions, then processes the information and sends it over the network to a server.
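For illustration, here’s a minimal sketch of what a call into a tracker like this might look like. Every name below is hypothetical; Usage Tracker’s real API is internal to HubSpot.

```typescript
// Minimal, hypothetical sketch of a tracking call; not the real
// Usage Tracker API, whose internals are private to HubSpot.
type EventProperties = Record<string, string | number | boolean>;

function track(eventName: string, properties: EventProperties): void {
  // A real tracker would validate and enrich the event, queue it,
  // and eventually dispatch it over the network.
  console.log(`queued event "${eventName}"`, properties);
}

track("button-clicked", { buttonId: "create-deal", page: "/deals" });
```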

Hundreds of teams at HubSpot rely daily on Usage Tracker to drive our data-driven platform and allow our engineering teams, data analysts, and product experts to make high-level decisions that affect our entire business. 

As Matt says, “Usage Tracker is the library responsible for tracking user behavior. It allows us to grow better and provide better experiences to our teams and customers.” And that is true, because teams rely on Usage Tracker to make decisions backed by data.

But what if something was wrong? Usage Tracker operates by resolving parameters and data asynchronously, allowing the teams that use it to supply parameters and data on an as-needed basis. This means the data is resolved when needed, then merged with pre-existing static information, like, for example, your screen size.
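As a rough sketch of that flow, assuming a hypothetical resolver-based API (none of these names are HubSpot’s actual internals):

```typescript
// Sketch of on-demand async resolution merged with static data.
// All names here are hypothetical, not HubSpot's implementation.
type Resolver = () => Promise<string>;

// Static information known up front, e.g. the screen size.
const staticParams = { screenWidth: 1920, screenHeight: 1080 };

// Hypothetical stub standing in for a real asynchronous lookup.
async function fetchCurrentUserEmail(): Promise<string> {
  return "user@example.com";
}

// Parameters supplied as resolvers, evaluated only when needed.
const asyncParams: Record<string, Resolver> = {
  email: fetchCurrentUserEmail,
};

async function resolveParams(): Promise<Record<string, unknown>> {
  const resolved: Record<string, string> = {};
  for (const [key, resolve] of Object.entries(asyncParams)) {
    resolved[key] = await resolve();
  }
  // Merge the lazily resolved values with pre-existing static data.
  return { ...staticParams, ...resolved };
}
```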

Finally, after the data is resolved, validated, and processed, it gets queued and then dispatched over the network once certain conditions are met. And it works fabulously. But if everything were evergreen, I wouldn’t be here giving this talk. So I’m here to talk about a bug Copilot found: a very important one, but very well hidden.
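The queue-and-dispatch step might look something like this; the batch size, endpoint, and use of `navigator.sendBeacon` are assumptions for illustration, not the real implementation:

```typescript
// Sketch: events are queued, then dispatched once a condition is
// met. Batch size and endpoint are invented for this example.
const queue: object[] = [];
const BATCH_SIZE = 20;

function enqueue(event: object): void {
  queue.push(event);
  if (queue.length >= BATCH_SIZE) {
    // Send the whole batch over the network and clear the queue.
    navigator.sendBeacon("/track", JSON.stringify(queue.splice(0)));
  }
}
```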

So what did Copilot find, exactly? Well, going back to the architecture I explained before, Usage Tracker resolves asynchronous data, including what we call “identifiers”: parameters that allow us to determine which user, or anonymous user, is performing an action on our platform. In other words, who we are tracking.

For example, if a user clicks a button: which user clicked it, and on which page? That’s very important because, without knowing who the actors in our systems are, we cannot reliably create cohorts from our data streams or derive the specific outcomes and decisions that should be made based on what we are tracking. Like, how can we know whether a certain feature introduced only for “starter” tier users is actually being adopted if we don’t know which kinds of users are actually using it?
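To make that concrete, the resolved identifiers might have a shape along these lines (a hypothetical sketch, not HubSpot’s real types):

```typescript
// Hypothetical shape of the resolved identifiers: each method may
// or may not yield a value, and at least one must succeed for
// tracking to proceed.
interface Identifiers {
  email?: string;       // e.g. from the logged-in session
  userId?: string;      // e.g. from an auth token
  anonymousId?: string; // e.g. from a cookie for anonymous visitors
}
```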

Yes, but then what went wrong?

Well, on a regular day like any other, I was working on an update to a function responsible for resolving that set of asynchronous data and ensuring that unhandled rejections get filtered out, meaning the data that failed to resolve should be removed from the final processing pipeline. For example, if the system failed to resolve the current user’s email address, it should flag that the email address was not resolved without breaking the execution flow. But if the system was unable to identify the user by any of the available methods, then it should prevent tracking from being done at all, because we cannot identify the user.
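A minimal sketch of that filter-and-guard behavior, using `Promise.allSettled`; the function and error handling here are my reconstruction, not the actual HubSpot code:

```typescript
// Sketch: resolve every identifier, keep fulfilled values, drop
// rejected ones, and refuse to track when no method succeeded.
async function resolveIdentifiers(
  resolvers: Record<string, () => Promise<string>>
): Promise<Record<string, string>> {
  const entries = Object.entries(resolvers);
  const settled = await Promise.allSettled(entries.map(([, run]) => run()));

  const resolved: Record<string, string> = {};
  settled.forEach((result, i) => {
    if (result.status === "fulfilled") {
      resolved[entries[i][0]] = result.value;
    }
    // Rejections are filtered out without breaking execution flow.
  });

  if (Object.keys(resolved).length === 0) {
    // No identification method succeeded: prevent tracking entirely.
    throw new Error("unable to identify the user; dropping event");
  }
  return resolved;
}
```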

And here’s where Copilot gets introduced to the story: while I was writing a new if statement within the function, Copilot made a strange code suggestion. The suggestion was to add a case to alert that the data that failed to resolve had, well, failed to resolve. That didn’t really make sense, at least not where it added that suggestion.

And that’s because our logic slices the data that failed to resolve out of the equation completely, or so we imagined. Diving deeper: after the data gets resolved, we merge the non-resolved data, what we call “static parameters,” with the resolved data, and the data that failed to resolve is sliced out by marking it as undefined. But looking closely, the function we use to merge data by default overrides entries with values from the resolved data even when those values are undefined. Meaning: our system would falsely think that we had succeeded in identifying the user, when actually we had not, because it failed to detect that all of the identifying methods had failed.
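Here is one plausible reproduction of that class of bug, reconstructed from the description above rather than taken from HubSpot’s code:

```typescript
// Reconstruction of the bug class: undefined values survive a
// naive merge and masquerade as successful identification.
const staticParams: Record<string, unknown> = { screenWidth: 1920 };

// Every identifying method failed, so each entry became undefined.
const resolvedData = { email: undefined, userId: undefined };

// Naive merge: the undefined entries are copied in anyway, so the
// keys exist and a presence check falsely reports success.
const merged = { ...staticParams, ...resolvedData };
console.log("email" in merged); // true, even though nothing resolved

// Safer merge: strip undefined values before merging.
const cleaned = Object.fromEntries(
  Object.entries(resolvedData).filter(([, value]) => value !== undefined)
);
const safeMerged = { ...staticParams, ...cleaned };
console.log("email" in safeMerged); // false, as it should be
```

Either way you frame it, the fix is the same: treat undefined as “absent” during the merge instead of letting it overwrite or stand in for real values.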

Keep in mind that this is an edge case, because in reality we are usually able to identify the user successfully with at least one of our identifying methods (there are many). That very effectively hid the existence of this bug: within this sea of data, it would only affect rare scenarios where, for example, a network request failed, or in testing environments.

But even though it is just an edge scenario, it also means that if, at any given moment, we changed how any of those asynchronous identifying methods work, this small edge scenario would turn into a massive one, as our system would just keep thinking it was able to successfully identify the user when it was not.

So this is why, at least in this circumstance, Copilot was essential. Not because we wouldn’t have been able to identify the bug if an incident had happened, but because it subtly gave a suggestion, based on how it read our existing code base, that effectively prevented such an incident from happening.

Remember: Copilot makes suggestions based on the existing code base you’re working with, and in our case it proved to be a very helpful pair programmer. Or you could call it an assistant, because it constantly gives suggestions and highlights things that can easily be missed by a human. It spared us the enormous headache of discovering this bug mid-incident, possibly only after an analyst noticed weird behaviors and inconsistencies in the data we were tracking, by which point it would already have done big damage.

And this is the actual impact that Copilot creates: it helped me and my team possibly prevent a disaster, and it can help you and your team prevent bugs, write better code, and surface things you wouldn’t have noticed before. Because that’s the magic of Copilot: the small things that create a big impact.

So what’s next? Well, because of this, we created better processes to ensure that this kind of edge situation won’t happen again: documenting these sorts of lines of code so we understand why they were written the way they were, creating more tests for specific edge case scenarios, and adding more monitoring tools that can preemptively notice drastic deviations from normal data patterns.
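As an illustration of the kind of regression test that guards this edge case, here is a hypothetical Jest-style sketch:

```typescript
// Hypothetical regression test for the undefined-merge edge case:
// identifiers that failed to resolve must not survive the merge.
test("failed identifiers are stripped before merging", () => {
  const resolvedData = { email: undefined, userId: undefined };
  const cleaned = Object.fromEntries(
    Object.entries(resolvedData).filter(([, value]) => value !== undefined)
  );
  // Nothing should survive when every identifier failed.
  expect(Object.keys(cleaned)).toHaveLength(0);
});
```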

Also, I’m still advocating for Copilot and hoping that one day we adopt it org-wide. Who knows? Stories like these are what drive change. 

So thank you for having me and allowing me to share this developer story that I had with GitHub Copilot. If you wish, please follow me on GitHub @ovflowd. Lastly, I want to thank my team and HubSpot for being amazing and always supporting me. And of course, thank GitHub for the opportunity. Stay safe, have a good one, and enjoy Universe.

HubSpot is a leading CRM platform that provides software and support to help businesses grow better. Our platform includes marketing, sales, service, and website management products that start free and scale to meet our customers’ needs at any stage of growth. Today, thousands of customers around the world use our powerful and easy-to-use tools and integrations to attract, engage, and delight customers.

About The ReadME Project

Coding is usually seen as a solitary activity, but it’s actually the world’s largest community effort led by open source maintainers, contributors, and teams. These unsung heroes put in long hours to build software, fix issues, field questions, and manage communities.

The ReadME Project is part of GitHub’s ongoing effort to amplify the voices of the developer community. It’s an evolving space to engage with the community and explore the stories, challenges, technology, and culture that surround the world of open source.
