First introduced as a technical preview in June 2021, GitHub Copilot quickly emerged as the world’s first at-scale generative AI coding tool when it became generally available in June 2022. Since then, it’s played a critical role in redefining the developer experience and underscoring the impact of developer productivity and satisfaction on business outcomes.
In our latest survey, we found that 92% of U.S.-based developers are already using AI coding tools both in and outside of work, which shows that most companies are already using AI, whether they know it or not. As the creators of the world's most widely adopted generative AI coding tool, we want to empower other organizations to accelerate their innovation, while ensuring they have the transparency they need to understand and feel confident using GitHub Copilot. That's why we're launching the GitHub Copilot Trust Center.
We often field questions about how GitHub Copilot protects user privacy and whether the code that GitHub Copilot suggests is secure. Those questions, and many others about security, privacy, compliance, and intellectual property, are clearly answered on the GitHub Copilot Trust Center. When developers use GitHub Copilot, they can augment their capabilities and tackle large, complex problems in a way they couldn't before. By following good coding practices and taking advantage of GitHub Copilot's built-in safeguards, they can feel confident in the code they're contributing.
AI is here to stay—and it’s already transforming how developers approach their day-to-day work. But just like any disruptive technology throughout history, AI brings important questions around its use and implications.
Organizations can reference the GitHub Copilot Trust Center to understand GitHub Copilot's capabilities, proactively build policies that enable its use, and responsibly and effectively equip their developers with the AI pair programmer.
Here are a few frequently asked questions to get you started:
- What personal data is used by GitHub Copilot for Business and how? Copilot for Business collects three kinds of personal data: user engagement data, prompts, and suggestions. User engagement data is information about events that are generated when interacting with a code editor. A prompt is a compilation of IDE code and relevant context (IDE comments and code in open files) that the GitHub Copilot extension sends to the AI model to generate suggestions. A suggestion is one or more lines of proposed code and other output returned to the GitHub Copilot extension after a prompt is received and processed by the GitHub Copilot model.
Copilot for Business uses the source code in your IDE only to generate a suggestion. It also performs several scans to identify and remove certain information within a prompt. Prompts are only transmitted to the AI model to generate suggestions in real time and are deleted once the suggestions are generated. Copilot for Business also does not use your code to train the Azure OpenAI model. GitHub Copilot for Individual users, however, can opt in and explicitly provide consent for their code to be used as training data. User engagement data is used to improve the performance of the Copilot service; specifically, it's used to fine-tune ranking, sort algorithms, and craft prompts.
- What’s actually happening when GitHub Copilot responds to a prompt? An important note is that GitHub Copilot’s suggestions are not copied and pasted from any code database. Rather, GitHub Copilot uses probabilistic reasoning to generate suggestions. GitHub Copilot sends a prompt to its AI model, which makes a probabilistic determination of what is likely to come next in your coding sequence and provides suggestions.
- How does GitHub Copilot aid secure development? GitHub Copilot leverages a variety of security measures to remove sensitive information in code, block insecure coding patterns, and detect vulnerable patterns in incomplete fragments of code. GitHub also offers solutions to assist with other aspects of security throughout the SDLC, including code scanning, secret scanning, and dependency management.
The GitHub Copilot Trust Center will live in our Resources hub, making it easy for organizations to find answers to a number of common questions regarding GitHub for Enterprise. From there, enterprise teams can find and navigate the GitHub Copilot Trust Center based on the topic that their questions or concerns fall under:
- Security, which explains how GitHub Copilot aids secure development and works together with other security measures to protect your code from vulnerabilities.
- Privacy, which answers questions about what personal data is collected, how long it’s retained, and how it’s used.
- IP and open source, which addresses the safeguards we’ve put in place to mitigate IP and open source concerns (including a filtering mechanism) when using GitHub Copilot.
- Accessibility, which covers the standards GitHub follows when designing our products.
- Labor market, which includes research about how GitHub Copilot is increasing developer productivity and lowering the barrier to entry in software development.
As GitHub’s Chief Legal Officer, I understand the nuanced challenges of enabling company-wide AI adoption, especially in an evolving regulatory landscape. GitHub Copilot is enterprise-ready, but organizations need to clearly understand how the tool meets their compliance, security, and accessibility requirements. Our aim is to bring you that clarity with the GitHub Copilot Trust Center.
Over the past year, it’s been astonishing to see the transformative power of generative AI, and we’re excited to be at the vanguard of that innovation. We also embrace the challenge of creating a safe path forward into this new frontier, one that allows companies and organizations of all sizes to responsibly innovate with generative AI.