Commit e5168f9

Update README.md
1 parent 2769c13 commit e5168f9

2 files changed: 21 additions & 11 deletions

CustomSuggestionService/ContentView.swift

Lines changed: 4 additions & 2 deletions
```diff
@@ -178,8 +178,10 @@ struct RequestStrategyPicker: View {
         case .continue:
             Text("Continue").tag(option.rawValue)
         case .codeLlamaFillInTheMiddle:
-            Text("CodeLlama Fill-in-the-Middle (Good for Codellama:xb-code)")
-                .tag(option.rawValue)
+            Text(
+                "CodeLlama Fill-in-the-Middle (Good for Codellama:xb-code and other models with Fill-in-the-Middle support)"
+            )
+            .tag(option.rawValue)
         case .codeLlamaFillInTheMiddleWithSystemPrompt:
             Text("CodeLlama Fill-in-the-Middle with System Prompt")
                 .tag(option.rawValue)
```
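For context, the picker rows in the diff above follow a common pattern: an enum of strategies whose cases map to display labels, with `.tag(option.rawValue)` tying each row to the stored selection. A minimal sketch; the type, case, and property names here are assumptions inferred from the diff, not the app's actual declarations:

```swift
// Sketch of the option-to-label mapping shown in the diff. The enum and
// property names are assumptions inferred from the diff, not the app's code.
enum RequestStrategyOption: String, CaseIterable {
    case `default`
    case naive
    case `continue`
    case codeLlamaFillInTheMiddle
    case codeLlamaFillInTheMiddleWithSystemPrompt

    // The label each case displays in the picker.
    var label: String {
        switch self {
        case .default:
            return "Default"
        case .naive:
            return "Naive"
        case .continue:
            return "Continue"
        case .codeLlamaFillInTheMiddle:
            return "CodeLlama Fill-in-the-Middle (Good for Codellama:xb-code and other models with Fill-in-the-Middle support)"
        case .codeLlamaFillInTheMiddleWithSystemPrompt:
            return "CodeLlama Fill-in-the-Middle with System Prompt"
        }
    }
}
```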

README.md

Lines changed: 17 additions & 9 deletions
````diff
@@ -1,4 +1,4 @@
-# Custom Suggestion Service for Copilot for Xcode
+# Custom Suggestion Service for Copilot for Xcode
 
 This extension offers a custom suggestion service for [Copilot for Xcode](https://github.com/intitni/CopilotForXcode), allowing you to leverage a chat model to enhance the suggestions provided as you write code.
 
@@ -27,31 +27,39 @@ The app supports three types of suggestion services:
 - Models with completions API
 - [Tabby](https://tabby.tabbyml.com)
 
-It is recommended to use Tabby since they have extensive experience in crafting prompts.
+If you are new to running a model locally, you can try [LM Studio](https://lmstudio.ai).
+
+### Recommended Settings
+
+- Use Tabby since they have extensive experience in code completion.
+- Use models with completions API with Fill-in-the-Middle support (for example, codellama:7b-code), and use the "Codellama Fill-in-the-Middle" strategy.
+
+### Others
 
-If you choose not to use Tabby, it is advisable to use a custom model with the completions API and employ the default request strategy.
+In other situations, it is advisable to use a custom model with the completions API over a chat completions API, and employ the default request strategy.
 
 Ensure that the prompt format remains as simple as the following:
 
-```
+```
 {System}
 {User}
 {Assistant}
 ```
 
-If you are new to running a model locally, you can try [LM Studio](https://lmstudio.ai).
-
 ## Strategies
 
 - Default: This strategy meticulously explains the context to the model, prompting it to generate a suggestion.
 - Naive: This strategy rearranges the code in a naive way to trick the model into believing it's appending code at the end of a file.
 - Continue: This strategy employs the "Please Continue" technique to persuade the model that it has started a suggestion and must continue to complete it. (Only effective with the chat completion API.)
+- CodeLlama Fill-in-the-Middle: This strategy uses special tokens to guide the model to generate suggestions. The model needs to support FIM (codellama:xb-code, starcoder, etc.). It uses the special tokens documented by CodeLlama.
+- CodeLlama Fill-in-the-Middle with System Prompt: The previous strategy has no system prompt telling the model what to do; you can try this one with models that don't support FIM.
 
 ## Contribution
 
-Prompt engineering is a challenging task, and your assistance is invaluable.
+Prompt engineering is a challenging task, and your assistance is invaluable.
 
-The most complex things are located within the `Core` package.
+The most complex things are located within the `Core` package.
 
-- To add a new service, please refer to the `CodeCompletionService` folder.
+- To add a new service, please refer to the `CodeCompletionService` folder.
 - To add new request strategies, check out the `SuggestionService` folder.
+
````
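The `{System}`/`{User}`/`{Assistant}` layout recommended in the README amounts to plain concatenation, with the assistant section left open for the model to fill in. A minimal sketch; `renderPrompt` is an illustrative name, not part of the app's API:

```swift
// Render the simple prompt shape from the README: system text, then the
// user content, then the point where the model continues as the assistant.
// `renderPrompt` is an illustrative name, not part of the app's API.
func renderPrompt(system: String, user: String) -> String {
    """
    \(system)
    \(user)

    """
}

let prompt = renderPrompt(
    system: "You are a code completion engine.",
    user: "Complete the following Swift function: func add("
)
// `prompt` ends right where the assistant's completion should begin.
```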

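The Fill-in-the-Middle strategies listed in the README rely on CodeLlama's infilling tokens (`<PRE>`, `<SUF>`, `<MID>`). A hedged sketch of how such a prompt can be assembled, following the token spelling and spacing documented for CodeLlama infilling; the helper name is illustrative:

```swift
// Build a CodeLlama-style infilling prompt from the code before and after
// the cursor; the model generates the missing middle part. `fimPrompt` is
// an illustrative name, not the app's actual code.
func fimPrompt(prefix: String, suffix: String) -> String {
    "<PRE> \(prefix) <SUF>\(suffix) <MID>"
}

let infill = fimPrompt(
    prefix: "func add(_ a: Int, _ b: Int) -> Int {\n    return",
    suffix: "\n}"
)
```

The "with System Prompt" variant prepends an instruction describing the task, which is why the README suggests trying it with models that lack native FIM support.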