| Title: | Evaluating NLP tools for AI in software requirements analysis |
| Author: | Okechukwu, Cornelius Chimuanya; Šilhavý, Radek; Šilhavý, Petr |
| Document type: | Conference paper (English) |
| Source document: | Lecture Notes in Networks and Systems. 2025, vol. 1563 LNNS, p. 470-482 |
| ISSN: | 2367-3389 |
| ISBN: | 9789819652372 |
| DOI: | https://doi.org/10.1007/978-3-032-00715-5_31 |
| Abstract: | Software requirements analysis is increasingly automated with natural language processing (NLP) tools, improving efficiency and precision. This research uses the Mendeley FR_NFR dataset to evaluate the classification of functional requirements (FR) and non-functional requirements (NFR) with three NLP tools: NLTK, OpenAI, and spaCy. The evaluation relies on performance metrics such as F1-score, recall, accuracy, precision, and confusion matrices. With a 94% F1-score and exceptional accuracy, OpenAI is a strong option for high-stakes applications, despite the associated API costs. With 83% accuracy and 0.1 s per query, spaCy balances speed and efficiency, making it well suited to real-time applications. With its 68% accuracy, NLTK’s rule-based methodology remains a viable choice for prototyping or for controlled settings where transparency is crucial. The results show that OpenAI’s transformer-based model, with an average accuracy of 92%, outperforms NLTK and spaCy, although spaCy retains an advantage in entity recognition. By elucidating the trade-offs between accuracy, interpretability, and computational efficiency, this study provides practitioners with critical insights. |
| Full text: | https://link.springer.com/chapter/10.1007/978-3-032-00715-5_31 |
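The abstract evaluates FR/NFR classifiers with accuracy, precision, recall, F1-score, and confusion matrices. A minimal sketch of how those metrics are computed for a binary requirement-classification task follows; the gold and predicted labels are invented for illustration, and this is not the authors' code or the FR_NFR dataset.

```python
# Illustrative only: compute the evaluation metrics named in the abstract
# (accuracy, precision, recall, F1, confusion matrix) for binary FR/NFR
# classification. Labels below are hypothetical.
from collections import Counter

def evaluate(y_true, y_pred, positive="FR"):
    """Return accuracy, precision, recall, F1, and the confusion counts,
    treating `positive` as the positive class."""
    counts = Counter()
    for t, p in zip(y_true, y_pred):
        if t == positive and p == positive:
            counts["tp"] += 1          # true positive
        elif t != positive and p == positive:
            counts["fp"] += 1          # false positive
        elif t == positive and p != positive:
            counts["fn"] += 1          # false negative
        else:
            counts["tn"] += 1          # true negative
    tp, fp, fn, tn = counts["tp"], counts["fp"], counts["fn"], counts["tn"]
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "f1": f1, "confusion": {"tp": tp, "fp": fp, "fn": fn, "tn": tn}}

# Hypothetical gold and predicted labels for six requirements.
gold = ["FR", "FR", "NFR", "NFR", "FR", "NFR"]
pred = ["FR", "NFR", "NFR", "NFR", "FR", "FR"]
print(evaluate(gold, pred))
```

The paper's reported figures (e.g. OpenAI's 94% F1) would come from applying such metrics to each tool's predictions over the full dataset.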