Using GitHub Copilot's LLM in your VS Code extension
In the July 2024 release of Visual Studio Code, extension developers can use GitHub Copilot's large language models (LLMs) to their advantage through the new Language Model API. This API allows you to retrieve the LLMs available in Visual Studio Code and send requests to them.
The nice thing about this new API is that you can use it in your extension any way you want. For instance, you can integrate your extension into GitHub Copilot Chat, perform requests from your extension commands, create your own code completion logic, and more.
Waldek Mastykarz gave me the idea to test this out in the Front Matter CMS extension. Front Matter already had an AI integration for title, description, and taxonomy suggestions, but it was only available to the project’s sponsors. The idea was to change this so that all users can use the GitHub Copilot LLM when it’s available in their editor.
Using the Language Model API
In the upcoming version of Front Matter CMS, the GitHub Copilot LLM will be used when creating a new article to suggest a title and allow the user to generate a description and taxonomy based on the title and contents of the article. Extension commands trigger these actions.
In the following example, I will show you how to interact with the LLM from within your extension commands.
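As a starting point, here is a minimal sketch of how such a command could be wired up in the extension's activate function. The command ID frontMatter.generateDescription and the generateDescription helper are hypothetical names used for illustration; the helper gets filled in with the snippets from the next sections.

import * as vscode from "vscode";

// Hypothetical helper that will hold the model, prompt, and request logic below
async function generateDescription(): Promise<void> {
  // ...retrieve a model, build the prompt, and send the request
}

export function activate(context: vscode.ExtensionContext) {
  // "frontMatter.generateDescription" is a hypothetical command ID
  context.subscriptions.push(
    vscode.commands.registerCommand(
      "frontMatter.generateDescription",
      generateDescription
    )
  );
}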
Retrieving the available LLMs
To retrieve the available LLMs in Visual Studio Code, use the vscode.lm.selectChatModels method. This method returns a promise that resolves to the available LLMs.
// Retrieving all models
const models = await vscode.lm.selectChatModels();
// Retrieving a specific model
const [model] = await vscode.lm.selectChatModels({
  vendor: "copilot",
  family: "gpt-3.5-turbo",
});
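Keep in mind that selectChatModels can return an empty array, for example when GitHub Copilot is not installed or enabled. Here is a minimal sketch of guarding against that case; the warning text is my own wording:

const models = await vscode.lm.selectChatModels({ vendor: "copilot" });

if (models.length === 0) {
  // No Copilot model is available; inform the user and bail out
  vscode.window.showWarningMessage(
    "No GitHub Copilot model is available. Install or enable GitHub Copilot to use this feature."
  );
  return;
}

const [model] = models;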
Creating a prompt
Once you know the model is available, you can create a prompt to interact with the LLM. The prompt consists of one or more vscode.LanguageModelChatMessage objects.
const title = await vscode.window.showInputBox({
  placeHolder: "Enter the topic",
  prompt: "Enter the topic of the blog post",
});

if (!title) {
  return;
}

const messages = [
  vscode.LanguageModelChatMessage.User(
    `For an article created with Front Matter CMS, create me a SEO friendly description that is a maximum of 160 characters long.`
  ),
  vscode.LanguageModelChatMessage.User(
    `Desired format: just a string, e.g., "My first blog post".`
  ),
  vscode.LanguageModelChatMessage.User(`The article topic is """${title}"""`),
];
Sending the prompt
Once the prompt is created, you can send it to the LLM using the model.sendRequest method.
let chatResponse: vscode.LanguageModelChatResponse | undefined;

try {
  chatResponse = await model.sendRequest(
    messages,
    {},
    new vscode.CancellationTokenSource().token
  );
} catch (err) {
  if (err instanceof vscode.LanguageModelError) {
    console.log(err.message, err.code, err.cause);
  } else {
    throw err;
  }
  return;
}

let allFragments = [];
for await (const fragment of chatResponse.text) {
  allFragments.push(fragment);
}

vscode.window.showInformationMessage(allFragments.join(""));
The result of this code is an information message showing the generated description.
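Because the response is streamed, you do not have to collect all fragments before showing output. As a sketch, you could write each fragment to an output channel as it arrives, replacing the fragment-collecting loop above (the stream can only be consumed once); the channel name is my own choice:

// Stream the response into an output channel instead of collecting it first
const channel = vscode.window.createOutputChannel("Front Matter AI");
channel.show();

for await (const fragment of chatResponse.text) {
  channel.append(fragment);
}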
Conclusion
The Visual Studio Code team made it easy for extension developers to use the GitHub Copilot LLMs in their extensions. The Language Model API opens up many possibilities for extension developers to enhance their extensions with AI capabilities.