Capabilities for self-correction of responses #7
slobentanzer started this conversation in Ideas
-
For the error correction in KG queries: typically, the model can make wrong assumptions about the KG structure, e.g. not knowing the direction of the relationship "DRUG" targets "PROTEIN". As an idea, we could keep basic references of correct Cypher queries to compare against, and use a semantic distance score to check the correctness of the model-generated queries: https://python.langchain.com/docs/guides/evaluation/string/string_distance#configure-the-string-distance-metric
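A minimal sketch of how that check could look, using the string distance evaluator from the linked LangChain docs (it needs the rapidfuzz package installed); the reference and generated queries below are made up for illustration:

```python
from langchain.evaluation import load_evaluator, StringDistance

# Configure the string distance metric as shown in the linked docs.
evaluator = load_evaluator(
    "string_distance", distance=StringDistance.JARO_WINKLER
)

# Hypothetical reference query with the correct relationship direction.
reference_query = "MATCH (d:Drug)-[:TARGETS]->(p:Protein) RETURN d, p"
# Hypothetical model-generated query with the direction reversed.
generated_query = "MATCH (d:Drug)<-[:TARGETS]-(p:Protein) RETURN d, p"

result = evaluator.evaluate_strings(
    prediction=generated_query, reference=reference_query
)
# result["score"] is a distance: small values mean the queries are close,
# so a threshold could be used to flag queries that deviate too far from
# the reference and should be sent back for correction.
print(result["score"])
```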
-
This idea relates to the conversation logic of BioChatter. At the moment, we have a basic conversation with history, as well as a corrective agent which only enters the conversation if it has found something to correct. However, it currently just states this correction to the user, and the logic ends there. What could be imagined is that the corrective agent instead goes back to the original model that made the mistake and asks it to correct that mistake. This would require an extension of the correction logic in BioChatter, as well as the development of prompts that make this correction effective.
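As a rough sketch of what such a feedback loop could look like (the helper callables below are illustrative placeholders, not the current BioChatter API):

```python
from typing import Callable, Optional

def converse_with_correction(
    user_question: str,
    ask_model: Callable[[str], str],
    review_response: Callable[[str, str], Optional[str]],
    max_attempts: int = 2,
) -> str:
    """Hypothetical loop: the corrective agent's critique is fed back to
    the original model instead of only being reported to the user."""
    response = ask_model(user_question)
    for _ in range(max_attempts):
        critique = review_response(user_question, response)
        if critique is None:
            return response  # nothing to correct
        # Feed the critique back to the original model and ask it to fix
        # its own mistake; the prompt wording would need tuning.
        response = ask_model(
            f"Your previous answer may be flawed: {critique}\n"
            f"Please provide a corrected answer to: {user_question}"
        )
    return response
```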
Empty results from knowledge graph query
For instance, in prompts.py, we generate Cypher queries for a given BioCypher knowledge graph; in some cases, the model gets those wrong, and we receive an empty result. The correction logic could be triggered there and, given the user's question, the LLM's query, and the information that the query may be erroneous, could generate a response to the model, asking it to correct its mistake. The regenerated query could then be run for the user. If it succeeds, great; if not, we could generate a report of the failure for the user, so they can manually intervene.
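A hedged sketch of how that trigger could be wired up, with hypothetical stand-ins for the query generation in prompts.py, the KG connection, and the user-facing report:

```python
from typing import Callable

def query_with_retry(
    user_question: str,
    generate_query: Callable[[str], str],
    run_query: Callable[[str], list],
    notify_user: Callable[[str], None],
    max_retries: int = 1,
) -> list:
    """Hypothetical flow: run the generated Cypher query, and on an empty
    result ask the model to revise it before reporting failure."""
    query = generate_query(user_question)
    results = run_query(query)
    for _ in range(max_retries):
        if results:
            return results
        # Empty result: give the model the original question, the failed
        # query, and a hint that the query may be erroneous.
        query = generate_query(
            f"The Cypher query '{query}' returned no results for the "
            f"question '{user_question}' and may be erroneous. "
            f"Please correct it."
        )
        results = run_query(query)
    if not results:
        # Still empty: report the failure so the user can intervene manually.
        notify_user(
            f"Could not retrieve results; the last attempted query was: {query}"
        )
    return results
```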