On the Concerns of Developers When Using GitHub Copilot: Conclusion & References

4 Mar 2024

Authors:

(1) Xiyu Zhou, School of Computer Science, Wuhan University, Wuhan, China;

(2) Peng Liang, School of Computer Science, Wuhan University, Wuhan, China;

(3) Zengyang Li, School of Computer Science, Central China Normal University, Wuhan, China;

(4) Aakash Ahmad, School of Computing and Communications, Lancaster University Leipzig, Leipzig, Germany;

(5) Mojtaba Shahin, School of Computing Technologies, RMIT University, Melbourne, Australia;

(6) Muhammad Waseem, Faculty of Information Technology, University of Jyväskylä, Jyväskylä, Finland.

VII. CONCLUSIONS

In this study, we focused on the issues users encounter when using GitHub Copilot, as well as their underlying causes and potential solutions. After identifying the RQs, we collected data from GitHub Issues, GitHub Discussions, and SO posts. Through manual screening, we obtained 476 GitHub Issues, 706 GitHub Discussions, and 184 SO posts related to Copilot, from which we extracted a total of 1,399 issues, 337 causes, and 497 solutions according to our data extraction criteria. The results indicate that Usage Issue and Compatibility Issue are the most common problems faced by users; that Copilot Internal Issue, Network Connection Issue, and Editor/IDE Compatibility Issue are the most frequent causes of issues; and that Bug Fixed by Copilot, Modify Configuration/Setting, and Use Suitable Version are the predominant solutions. Our findings suggest that Copilot should enhance its compatibility across various IDEs and editors, simplify its configuration, improve the quality of the generated code, and address concerns related to intellectual property and copyright. In addition, users need more customization options to tailor Copilot’s behavior and more control over the content it generates. Given the additional time required to verify code suggestions when using Copilot, integrating a code explanation feature is essential to enhance its overall utility and effectiveness in practical development scenarios.

In the next step, we plan to combine a survey with code testing experiments to evaluate how users actually use Copilot, as well as its performance in terms of security, maintainability, and other aspects.

REFERENCES

[1] M. Robillard, R. Walker, and T. Zimmermann, “Recommendation systems for software engineering,” IEEE Software, vol. 27, no. 4, pp. 80–86, 2010.

[2] S. Luan, D. Yang, C. Barnaby, K. Sen, and S. Chandra, “Aroma: Code recommendation via structural code search,” Proceedings of the ACM on Programming Languages, vol. 3, pp. 1–28, 2019.

[3] Gartner Identifies the Top 10 Strategic Technology Trends for 2024. https://tinyurl.com/2p879w7s.

[4] J. Austin, A. Odena, M. Nye, M. Bosma, H. Michalewski, D. Dohan, E. Jiang, C. Cai, M. Terry, Q. Le, et al., “Program synthesis with large language models,” arXiv preprint abs/2108.07732, 2021.

[5] GitHub Copilot · Your AI Pair Programmer. https://github.com/features/copilot.

[6] H. Pearce, B. Ahmad, B. Tan, B. Dolan-Gavitt, and R. Karri, “Asleep at the keyboard? Assessing the security of GitHub Copilot’s code contributions,” in Proceedings of the 43rd IEEE Symposium on Security and Privacy (S&P), pp. 754–768, IEEE, 2022.

[7] B. Yetistiren, I. Ozsoy, and E. Tuzun, “Assessing the quality of GitHub Copilot’s code generation,” in Proceedings of the 18th International Conference on Predictive Models and Data Analytics in Software Engineering (PROMISE), pp. 62–71, ACM, 2022.

[8] N. Nguyen and S. Nadi, “An empirical evaluation of GitHub Copilot’s code suggestions,” in Proceedings of the 19th IEEE/ACM International Conference on Mining Software Repositories (MSR), pp. 1–5, IEEE, 2022.

[9] S. Imai, “Is GitHub Copilot a substitute for human pair-programming? An empirical study,” in Proceedings of the 44th International Conference on Software Engineering: Companion Proceedings (ICSE-Companion), pp. 319–321, IEEE, 2022.

[10] S. Barke, M. B. James, and N. Polikarpova, “Grounded Copilot: How programmers interact with code-generating models,” Proceedings of the ACM on Programming Languages, vol. 7, pp. 1–27, 2023.

[11] S. Peng, E. Kalliamvakou, P. Cihon, and M. Demirer, “The impact of AI on developer productivity: Evidence from GitHub Copilot,” arXiv preprint abs/2302.06590, 2023.

[12] B. G. Glaser, “The constant comparative method of qualitative analysis,” Social Problems, vol. 12, no. 4, pp. 436–445, 1965.

[13] X. Zhou, P. Liang, B. Zhang, Z. Li, A. Ahmad, M. Shahin, and M. Waseem, Dataset of the Paper “On the Concerns of Developers When Using GitHub Copilot”, 2023.

[14] J. Cohen, “A coefficient of agreement for nominal scales,” Educational and Psychological Measurement, vol. 20, no. 1, pp. 37–46, 1960.

[15] GitHub Copilot Chat beta. https://github.blog/2023-09-20-github-copilot-chat-beta-now-available-for-all-individuals/.

[16] Copilot.vim. https://github.com/github/copilot.vim.

[17] Configuring network settings for GitHub Copilot. https://tinyurl.com/39vphp46.

[18] GitHub Copilot X. https://github.com/features/preview/copilot-x.

[19] B. Zhang, P. Liang, X. Zhou, A. Ahmad, and M. Waseem, “Demystifying practices, challenges and expected features of using GitHub Copilot,” International Journal of Software Engineering and Knowledge Engineering, 2023.

[20] C. Bird, D. Ford, T. Zimmermann, N. Forsgren, E. Kalliamvakou, T. Lowdermilk, and I. Gazit, “Taking flight with Copilot: Early insights and opportunities of AI-powered pair-programming tools,” ACM Queue, vol. 20, no. 6, pp. 35–57, 2023.

[21] R. Wang, R. Cheng, D. Ford, and T. Zimmermann, “Investigating and designing for trust in AI-powered code generation tools,” arXiv preprint abs/2305.11248, 2023.

[22] C. Wang, J. Hu, C. Gao, Y. Jin, T. Xie, H. Huang, Z. Lei, and Y. Deng, “Practitioners’ expectations on code completion,” arXiv preprint abs/2301.03846, 2023.

[23] M. Jaworski and D. Piotrkowski, “Study of software developers’ experience using the GitHub Copilot tool in the software development process,” arXiv preprint abs/2301.04991, 2023.

[24] J. T. Liang, C. Yang, and B. A. Myers, “A large-scale survey on the usability of AI programming assistants: Successes and challenges,” in Proceedings of the 45th International Conference on Software Engineering (ICSE), ACM, 2024.

[25] G. Sandoval, H. Pearce, T. Nys, R. Karri, B. Dolan-Gavitt, and S. Garg, “Security implications of large language model code assistants: A user study,” arXiv preprint abs/2208.09727, 2022.

This paper is available on arXiv under a CC 4.0 license.