
Using GitHub Copilot's LLM in your VS Code extension


In the July 2024 release of Visual Studio Code, extension developers will be able to use GitHub Copilot’s large language models (LLMs) through the new Language Model API. This API allows you to retrieve the LLMs available in Visual Studio Code and send requests to them.

The nice thing about this new API is that you can use it in your extension any way you want. For instance, you can integrate your extension into the GitHub Copilot chat, perform requests from your extension commands, create your own code completion logic, etc.
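As an illustration of the chat integration route, the Chat extension API lets you register a chat participant that forwards the user's prompt to one of the available models. A minimal sketch, with a hypothetical participant ID (chat participants also need to be declared in your extension's package.json):

Registering a chat participant
const participant = vscode.chat.createChatParticipant(
  // Hypothetical participant ID for illustration
  "myExtension.assistant",
  async (request, _context, stream, token) => {
    // Pick any available Copilot model
    const [model] = await vscode.lm.selectChatModels({ vendor: "copilot" });
    if (!model) {
      stream.markdown("No language model available.");
      return;
    }

    // Forward the user's prompt and stream the answer back into the chat
    const messages = [vscode.LanguageModelChatMessage.User(request.prompt)];
    const chatResponse = await model.sendRequest(messages, {}, token);

    for await (const fragment of chatResponse.text) {
      stream.markdown(fragment);
    }
  }
);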

Waldek Mastykarz gave me the idea to test this out in the Front Matter CMS extension. Front Matter already had an AI integration for title, description, and taxonomy suggestions, but it was only available to the project’s sponsors. The idea is to change this so that all users can use the GitHub Copilot LLM when it’s available in their editor.

Using the Language Model API

In the upcoming version of Front Matter CMS, the GitHub Copilot LLM will be used when creating a new article to suggest a title and allow the user to generate a description and taxonomy based on the title and contents of the article. Extension commands trigger these actions.

In the following examples, I will show you how to interact with the LLM from within your extension commands.
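For context, this kind of logic typically lives in a command handler registered in your extension's activate function. A minimal sketch, with a hypothetical command ID:

Registering an extension command
import * as vscode from "vscode";

export function activate(context: vscode.ExtensionContext) {
  context.subscriptions.push(
    // Hypothetical command ID for illustration
    vscode.commands.registerCommand("myExtension.generateDescription", async () => {
      // The snippets from the next sections go here
    })
  );
}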

Retrieving the available LLMs

To retrieve the available LLMs in Visual Studio Code, use the vscode.lm.selectChatModels method. This method returns a promise that resolves to the available LLMs.

Retrieving the model
// Retrieving all models
const models = await vscode.lm.selectChatModels();

// Retrieving a specific model
const [model] = await vscode.lm.selectChatModels({
  vendor: "copilot",
  family: "gpt-3.5-turbo",
});
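Note that selectChatModels can return an empty array, for example when GitHub Copilot is not installed or the user is not signed in, so it is worth checking that you actually received a model. A minimal sketch:

Checking model availability
const [model] = await vscode.lm.selectChatModels({ vendor: "copilot" });

if (!model) {
  // No matching model: Copilot may be missing or the user not signed in
  vscode.window.showWarningMessage("No Copilot language model available.");
  return;
}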

Creating a prompt

Once you know the model is available, you can create a prompt to interact with the LLM. The prompt consists of one or more vscode.LanguageModelChatMessage objects.

Creating a prompt
const title = await vscode.window.showInputBox({
  placeHolder: "Enter the topic",
  prompt: "Enter the topic of the blog post",
});

if (!title) {
  return;
}

const messages = [
  vscode.LanguageModelChatMessage.User(`For an article created with Front Matter CMS, create me a SEO friendly description that is a maximum of 160 characters long.`),
  vscode.LanguageModelChatMessage.User(`Desired format: just a string, e.g., "My first blog post".`),
  vscode.LanguageModelChatMessage.User(`The article topic is """${title}"""`),
];
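Keep in mind that each model has a limited context window. If you also include the article contents in the prompt, you can check its size with the model's countTokens method before sending the request. A small sketch, assuming model is the instance retrieved earlier:

Checking the prompt size
// maxInputTokens reports the model's input limit
const tokenCount = await model.countTokens(title);
console.log(`Prompt uses ${tokenCount} of ${model.maxInputTokens} input tokens`);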

Sending the prompt

Once the prompt is created, you can send it to the LLM using the model.sendRequest method.

Sending the prompt
let chatResponse: vscode.LanguageModelChatResponse | undefined;

try {
  chatResponse = await model.sendRequest(
    messages,
    {},
    new vscode.CancellationTokenSource().token
  );
} catch (err) {
  // The API throws a LanguageModelError for issues such as missing
  // user consent or model availability
  if (err instanceof vscode.LanguageModelError) {
    console.log(err.message, err.code, err.cause);
  } else {
    throw err;
  }
  return;
}

// The response is streamed in fragments; collect them all
const allFragments: string[] = [];
for await (const fragment of chatResponse.text) {
  allFragments.push(fragment);
}

vscode.window.showInformationMessage(allFragments.join(""));

The result of this code looks like this:

(Screenshot: Using the GitHub Copilot LLM in your extension)
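One more note on cancellation: the snippet above creates a throwaway CancellationTokenSource. In a real extension you may want to let the user cancel a long-running request, for instance by wrapping it in vscode.window.withProgress. A possible sketch:

Making the request cancellable
const description = await vscode.window.withProgress(
  {
    location: vscode.ProgressLocation.Notification,
    title: "Generating description...",
    cancellable: true,
  },
  async (_progress, token) => {
    // Pass the progress token so cancelling the notification
    // cancels the model request as well
    const chatResponse = await model.sendRequest(messages, {}, token);

    let result = "";
    for await (const fragment of chatResponse.text) {
      result += fragment;
    }
    return result;
  }
);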

Conclusion

The Visual Studio Code team made it easy for extension developers to use the GitHub Copilot LLMs in their extensions. The Language Model API opens up many possibilities for extension developers to enhance their extensions with AI capabilities.
