Search with Context Similarity #2
base: development
Conversation
QA: Issue
I tested around with a few models. Claude 3.5 Sonnet and OpenAI GPT-4o performed the best; other models hallucinated even with a very low temperature and a top_p value of 0.5 wherever possible.
@0x4007 Could you please check the model responses? And are there any questions that could judge the retrieval performance on topics that are discussed very rarely or only once?
Gold star? Not established. Will need to work on this ASAP. DM me, we can collaborate on this.
"description": "Ubiquibot plugin template repository with TypeScript support.", | ||
"author": "Ubiquity DAO", | ||
"description": "A highly context aware organization integrated chatbot", | ||
"author": "Ubiquity OS", |
"author": "Ubiquity OS", | |
"author": "Ubiquity DAO", |
- DAO is the organization.
- OS is the software.
- DevPool is the community.
```ts
    repo: repo || payload.repository.name,
    issue_number: issueNum || payload.issue.number,
  })
  .then(({ data }) => data as Issue);
```
Pretty unusual syntax to mix async/await and `.then()`.
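A hedged sketch of the reviewer's point: instead of mixing `await` with `.then()`, destructure the awaited result directly. `fetchIssueRaw` and the `Issue` shape below are stand-ins for illustration, not the plugin's real code.

```typescript
interface Issue {
  number: number;
  title: string;
}

// Stand-in for the Octokit call shown in the diff above.
async function fetchIssueRaw(): Promise<{ data: Issue }> {
  return { data: { number: 1, title: "example" } };
}

// Mixed style from the diff:
//   const issue = await fetchIssueRaw().then(({ data }) => data as Issue);
// Plain-await equivalent:
async function fetchIssue(): Promise<Issue> {
  const { data } = await fetchIssueRaw();
  return data;
}
```

Both forms behave the same at runtime; the plain-await version just avoids switching styles mid-expression.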
```ts
const issue = await fetchIssue(params);

let comments: IssueComments | ReviewComments = [];
```
Does it make sense to have two separate arrays for each data type?
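One hedged alternative to two per-type arrays is a single tagged array. The `kind` discriminant and these comment shapes are assumptions for illustration, not the plugin's real `IssueComments` / `ReviewComments` types.

```typescript
interface IssueComment {
  kind: "issue";
  body: string;
}

interface ReviewComment {
  kind: "review";
  body: string;
  path: string;
}

type AnyComment = IssueComment | ReviewComment;

const comments: AnyComment[] = [
  { kind: "issue", body: "LGTM" },
  { kind: "review", body: "rename this", path: "src/plugin.ts" },
];

// The discriminant lets callers narrow without parallel arrays:
const reviewPaths = comments
  .filter((c): c is ReviewComment => c.kind === "review")
  .map((c) => c.path);
```

The tag keeps insertion order across both comment types and avoids merging logic later.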
```ts
export async function runPlugin(context: Context) {
  const {
    logger,
    env: { UBIQUITY_OS_APP_SLUG },
```
Should be renamed to `UBIQUITY_OS_APP_NAME`:

```diff
-  env: { UBIQUITY_OS_APP_SLUG },
+  env: { UBIQUITY_OS_APP_NAME },
```
Model Cost Comparison

Unused files (7)
Unlisted dependencies (5)

Seems like we get what we pay for :)
QA: Can parse through linked code files in the issue spec and answer questions based on that.
Your QA results are quite interesting. We should prompt for and focus on brevity. Can you display (add an extra comment) the entire passed-in context? I would like to audit this. Once this is set up I would like to try asking a couple of questions.
The plugin is running at
On average, these responses cost approximately $0.22, based on an input token count of 2,500 and an output token count of 3,300 on the o1-mini model. While these responses are quite expensive, they provide a good overview of the task.
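The per-response cost arithmetic can be sketched as input and output tokens multiplied by their respective rates. The per-million-token rates below are placeholders for illustration, not actual o1-mini pricing, so the result will differ from the quoted figure.

```typescript
const inputTokens = 2_500;
const outputTokens = 3_300;
const inputRatePerMillionUsd = 3; // placeholder rate, assumption
const outputRatePerMillionUsd = 12; // placeholder rate, assumption

// Cost = input share + output share, each scaled by its per-million rate.
const costUsd =
  (inputTokens / 1_000_000) * inputRatePerMillionUsd +
  (outputTokens / 1_000_000) * outputRatePerMillionUsd;
```

With real rates plugged in, the same two-term formula reproduces the per-response estimate.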
That's mostly fine. Any price these models charge us is orders of magnitude cheaper than developer time, particularly for those on base pay.
I don't have access to
Actually, we should use mini because it has a much larger usable context length. Preview has a lot more internal reasoning token spend. I can borrow a key, but as I understand it, both o1 models are available for the same tier of OpenAI account, meaning if you have access to one, you should have access to both.
Resolves #50
@ubiquityos
gpt command #1
Results for Database fetching backfilling: