# SCER

**Repository Path**: ForeverMeteor/SCER

## Basic Information

- **Project Name**: SCER
- **Description**: The official code repository of SCER
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2024-04-09
- **Last Updated**: 2025-05-24

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# Self-Consistency, Extract and Rectify: Knowledge Graph Enhance Large Language Model for Electric Power Question Answering

This is the official repository for the paper **Self-Consistency, Extract and Rectify: Knowledge Graph Enhance Large Language Model for Electric Power Question Answering** (SCER).

## Installation

```
pip install -r requirement.txt
```

## Running the Experiments

1. Place the evaluation dataset under `data/eval` and the graph dataset under `data/graph`.
2. Fill your mirror site URL into the `url` variable of `CONSTANT`, and your API key into its `api_key` variable.
3. Run `SC_only.py` and `KB_only.py` in turn; the results will be saved to `result`.

Regarding step 2: if you want to run a model accessed through an API, follow the instructions above as-is. If you want to deploy a model yourself:

1. Add a new class under the `self_consistency` folder that inherits from the `SelfConsistency` class.
2. Following the existing `ChatGLM` template, implement the call to your own PLM.

Minimal sketches of both (the `CONSTANT` configuration and a custom `SelfConsistency` subclass) are given at the end of this README.

## Citation

To cite this paper:

```bibtex
@inproceedings{zhao2024self,
  title={Self-consistency, Extract and Rectify: Knowledge Graph Enhance Large Language Model for Electric Power Question Answering},
  author={Zhao, Jinxiong and Ma, Zhicheng and Zhao, Hong and Zhang, Xun and Liu, Qichuan and Zhang, Chentao},
  booktitle={International Conference on Intelligent Computing},
  pages={493--504},
  year={2024},
  organization={Springer}
}
```

## Acknowledgements

This work is inspired by CoT, SC, and RR.

```bibtex
@article{wei2022chain,
  title={Chain-of-thought prompting elicits reasoning in large language models},
  author={Wei, Jason and Wang, Xuezhi and Schuurmans, Dale and Bosma, Maarten and Xia, Fei and Chi, Ed and Le, Quoc V and Zhou, Denny and others},
  journal={Advances in neural information processing systems},
  volume={35},
  pages={24824--24837},
  year={2022}
}
```

```bibtex
@article{wang2022self,
  title={Self-consistency improves chain of thought reasoning in language models},
  author={Wang, Xuezhi and Wei, Jason and Schuurmans, Dale and Le, Quoc and Chi, Ed and Narang, Sharan and Chowdhery, Aakanksha and Zhou, Denny},
  journal={arXiv preprint arXiv:2203.11171},
  year={2022}
}
```

```bibtex
@article{he2022rethinking,
  title={Rethinking with retrieval: Faithful large language model inference},
  author={He, Hangfeng and Zhang, Hongming and Roth, Dan},
  journal={arXiv preprint arXiv:2301.00303},
  year={2022}
}
```
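As referenced in step 2 of the experiment instructions above, the scripts read the endpoint and key from `CONSTANT`. The snippet below is a minimal sketch of what that configuration might look like; only the variable names `url` and `api_key` come from this README, while the file layout and the OpenAI-style base URL are assumptions, so match the actual `CONSTANT` module in the repository.

```python
# Hypothetical CONSTANT.py layout; only the variable names `url` and
# `api_key` come from the README, everything else is an assumption.

# Base URL of the mirror site / API gateway the scripts should call.
url = "https://your-mirror-site.example.com/v1"

# API key for that endpoint. Keep real keys out of version control.
api_key = "sk-your-api-key"
```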
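For the self-deployment path described above (adding a class under `self_consistency` that inherits `SelfConsistency` and calls your own PLM), the sketch below shows one possible shape. The base-class import path, the constructor arguments, and the `query` method name are assumptions rather than the repository's actual interface; follow the existing `ChatGLM` class for the real method names and signatures.

```python
# A hypothetical wrapper for a self-deployed PLM served over HTTP.
# The import path, constructor signature, and `query` method are assumptions;
# mirror the existing ChatGLM implementation in the self_consistency folder.
import requests

from self_consistency.self_consistency import SelfConsistency  # assumed module path


class MyLocalPLM(SelfConsistency):
    """Calls a locally deployed PLM through a simple HTTP endpoint (hypothetical)."""

    def __init__(self, endpoint: str = "http://localhost:8000/generate", **kwargs):
        super().__init__(**kwargs)
        self.endpoint = endpoint

    def query(self, prompt: str) -> str:
        # Send the prompt to the local model server and return the generated text.
        # Adapt the request/response fields to whatever your serving stack expects.
        resp = requests.post(self.endpoint, json={"prompt": prompt}, timeout=60)
        resp.raise_for_status()
        return resp.json()["response"]
```

A script such as `SC_only.py` would then instantiate this class in place of the API-backed model; check how the existing `ChatGLM` class is wired in before copying this shape.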