# Custom Suggestion Service for Copilot for Xcode

This extension offers a custom suggestion service for [Copilot for Xcode](https://github.com/intitni/CopilotForXcode), allowing you to leverage a chat model to enhance the suggestions provided as you write code.

The app supports three types of suggestion services:

- Models with completions API
- [Tabby](https://tabby.tabbyml.com)

If you are new to running a model locally, you can try [LM Studio](https://lmstudio.ai).

### Recommended Settings

- Use Tabby, since the Tabby team has extensive experience in code completion.
- Use models with a completions API that supports Fill-in-the-Middle (for example, codellama:7b-code), and use the "CodeLlama Fill-in-the-Middle" strategy.

### Others

In other situations, it is advisable to use a custom model with the completions API rather than the chat completions API, and to employ the default request strategy.

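For context, a completions-style request sends one raw prompt string, while a chat-completions request wraps content in role-tagged messages. A minimal sketch of the two payload shapes (field names follow the common OpenAI-style API; the model names and contents are illustrative, not from this project):

```python
# Sketch: completions vs. chat completions payloads, assuming an
# OpenAI-style local server. All concrete values are placeholders.
completions_payload = {
    "model": "codellama:7b-code",  # example completion model
    "prompt": "def fib(n):",       # one raw text string, no roles
    "max_tokens": 64,
}

chat_payload = {
    "model": "some-chat-model",    # example chat model
    "messages": [                  # role-tagged turns instead of raw text
        {"role": "system", "content": "Complete the user's code."},
        {"role": "user", "content": "def fib(n):"},
    ],
}
```

The completions shape gives the service full control over the final prompt text, which is why it pairs well with the default request strategy.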
|
34 | 41 | Ensure that the prompt format remains as simple as the following: |
35 | 42 |
|
```
{System}
{User}
{Assistant}
```

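As an illustration, the three placeholders above can be joined into a single prompt string; the helper name and the example texts here are made up for the sketch:

```python
def build_prompt(system: str, user: str, assistant: str) -> str:
    # Keep the template minimal: each part in order, nothing else added.
    return "\n".join([system, user, assistant])

prompt = build_prompt(
    "You are a code completion service.",  # {System}
    "Complete: func add(a: Int, b: Int)",  # {User}
    "",                                    # {Assistant} left empty for the model to fill
)
```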
## Strategies

- Default: This strategy meticulously explains the context to the model, prompting it to generate a suggestion.
- Naive: This strategy rearranges the code in a naive way to trick the model into believing it's appending code at the end of a file.
- Continue: This strategy employs the "Please Continue" technique to persuade the model that it has already started a suggestion and must continue to complete it. (Only effective with the chat completions API.)
- CodeLlama Fill-in-the-Middle: This strategy uses the special tokens documented by CodeLlama to guide the model to generate suggestions. The model must support FIM to use it (codellama:xb-code, starcoder, etc.).
- CodeLlama Fill-in-the-Middle with System Prompt: The same as the previous strategy, but with a system prompt telling the model what to do. You can try it with models that don't support FIM.

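The CodeLlama infilling format wraps the code before and after the cursor in special tokens, and the model generates the missing middle after the final token. A minimal sketch of building such a prompt (the token spelling follows CodeLlama's published infilling format; the helper name and sample code are illustrative):

```python
def fim_prompt(prefix: str, suffix: str) -> str:
    # CodeLlama infilling: <PRE> {prefix} <SUF>{suffix} <MID>
    # The model completes the text between prefix and suffix after <MID>.
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

# Code before and after the cursor position:
prompt = fim_prompt("def add(a, b):\n    return ", "\n\nprint(add(1, 2))")
```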
## Contribution

Prompt engineering is a challenging task, and your assistance is invaluable.

The most complex things are located within the `Core` package.

- To add a new service, please refer to the `CodeCompletionService` folder.
- To add new request strategies, check out the `SuggestionService` folder.