
Extending Ontology-Driven Natural Language Generation for Requirements Engineering Using OntoUML: A Review

Received Date: October 13, 2025 Accepted Date: October 27, 2025 Published Date: October 31, 2025

doi:10.17303/jcssd.2025.4.101

Citation: Alaa Abdalazeim, Farid Meziane (2025) Extending Ontology-Driven Natural Language Generation for Requirements Engineering Using OntoUML: A Review. J Comput Sci Software Dev 4: 1-11

This paper presents an extended and critical systematic review of ontology-driven Natural Language Generation (NLG) methods for software requirements engineering, with particular emphasis on OntoUML-based modeling. Building upon the foundational survey of Abdalazeim and Meziane (2021), this updated review synthesizes research developments between 2010 and 2025, addressing the integration of ontological reasoning, hybrid generation pipelines, and large language models (LLMs). Using a PRISMA-inspired systematic protocol, fifty-three primary studies were identified, evaluated, and classified across multiple methodological and semantic dimensions.

The review reveals considerable progress in the use of ontological semantics for improving the clarity, traceability, and correctness of Software Requirements Specifications (SRS) generated in natural language. However, persistent gaps remain in the evaluation of OntoUML fidelity, reproducibility of benchmarks, and integration of human-in-the-loop validation. This paper contributes an expanded taxonomy of OntoUML-aware NLG systems, a comparative discussion of hybrid reasoning and LLM-assisted frameworks, and a roadmap toward semantically transparent, ontology-preserving requirement generation.

Keywords: OntoUML; Requirements Engineering; Natural Language Generation; Ontology-Driven Engineering; Large Language Models; Semantic Fidelity; Systematic Review; Hybrid Reasoning

Introduction

Natural language requirements (NLR) remain a primary mode for articulating system needs within software and systems engineering endeavors. Their intuitive and accessible nature facilitates effective communication among diverse stakeholders, including clients, developers, and analysts, by clearly expressing both functional and non-functional requirements. However, manually deriving NLRs from conceptual models often introduces ambiguities and semantic inconsistencies, which can propagate through subsequent development phases and hinder the overall quality and traceability of the requirements [1]. This challenge has driven substantial research efforts toward automating the generation of natural language from structured models, especially within the context of Model-Driven Engineering (MDE).

OntoUML, a conceptual modeling language founded on the Unified Foundational Ontology (UFO), offers a formal ontological basis that enhances the semantic rigor of domain specifications. Unlike standard UML, OntoUML explicitly differentiates between ontological categories such as kinds, roles, relators, and other relevant constructs, thereby providing a richer, more domain-accurate semantic foundation [2]. This ontological fidelity fosters closer alignment between conceptual structures and linguistic representations, promoting more precise and meaningful automated natural language generation. Despite these advantages, transforming OntoUML models into natural language remains a complex task, requiring advanced reasoning mechanisms and sophisticated linguistic synthesis techniques, an area that has seen limited, yet growing, exploration [3].

The pioneering survey by Abdalazeim and Meziane (2021) significantly contributed to this field by systematizing model-to-text transformation approaches for requirements engineering, focusing primarily on object-oriented and ontology-based methods published up to 2021 [4]. Since then, the landscape has evolved considerably, propelled by progress in ontology engineering, semantic reasoning capabilities, and the emergence of large language models (LLMs). These advancements enable more nuanced understanding and generation of contextually coherent requirements, offering new opportunities for enhancing semantic fidelity and automation.

This 2025 extended review seeks to address the limitations of previous work by:

  • Reassessing the research landscape with an emphasis on post-2018 developments in OntoUML-driven and ontology-based NLG.
  • Developing a refined taxonomy that captures hybrid reasoning strategies and LLM-assisted approaches.
  • Critically examining issues related to semantic fidelity, ontological sensitivity, and evaluation protocols employed in recent studies.
  • Outlining a research roadmap aimed at developing scalable, transparent, and semantically rigorous frameworks for OntoUML-to-natural language generation.

By analyzing 53 recent studies, this paper identifies prevailing trends, persistent challenges, and emerging opportunities at the intersection of ontology engineering, natural language processing, and requirements specification. The review underscores the crucial role of ontological foundations in ensuring that generated requirements transcend mere linguistic fluency to faithfully reflect their conceptual origins, particularly within the context of AI and LLM-enhanced systems. Notably, this work emphasizes how recent advances elevate the importance of ontological fidelity to ensure the relevance and reliability of automated requirements in an increasingly AI-driven engineering landscape.

This study adopts a systematic literature review (SLR) methodology grounded in the PRISMA framework to ensure rigor, transparency, and replicability. The review protocol covered the phases of identification, screening, eligibility assessment, and inclusion, yielding 53 primary studies published between 2010 and 2025. Comprehensive database searches across Scopus, Web of Science, IEEE Xplore, ACM Digital Library, SpringerLink, ScienceDirect, and arXiv were conducted using Boolean search strings designed to capture OntoUML and ontology-based NLG research; the final corpus was obtained after screening and applying the inclusion/exclusion criteria.

The inclusion criteria for this systematic review focus on peer-reviewed scholarly articles published in English that specifically address the generation of natural language requirements from conceptual models incorporating ontology reasoning or OntoUML constructs. To ensure scientific rigor and relevance, only studies presenting empirical, methodological, or theoretical contributions that involve OntoUML or foundational ontologies such as UFO in the context of requirements engineering (RE) and software requirements specification (SRS) generation have been considered. This approach excludes works focused solely on computational linguistics or natural language processing (NLP) techniques that do not engage with ontological modeling or RE concerns. Similarly, pure ontology papers that lack a direct requirements engineering perspective, as well as non-peer-reviewed publications such as theses, white papers, or non-indexed conference abstracts, have been excluded. This filtering is aimed at maintaining a comprehensive yet focused corpus of studies that substantively contribute to understanding ontology-driven natural language generation in RE.

Data were extracted along seven analytical dimensions, including modeling language, generation approach, ontological sensitivity, verification strategy, evaluation method, and tool support.

Extended Taxonomy

Ontology-driven frameworks serve as the semantic foundation for generating natural language requirements by leveraging ontological sensitivity to maintain high semantic fidelity. These approaches heavily rely on formal reasoning and validation rules to ensure that generated statements faithfully represent the underlying ontological models [5-10]. Tools such as IJASEIT and OBFRE provide automated generation complemented with human review, enabling rigorous verification of semantic correctness and traceability from model elements to natural language outputs [11-13].

Pattern-based template methods generate controlled natural language using predefined templates aligned with ontology elements, achieving moderate ontological sensitivity. They generally employ rule-based checking mechanisms with human review to verify the quality and semantic alignment of generated requirements. RE-OntoGen is a typical tool implementing this approach, balancing usability with a structured generation process that improves linguistic clarity while maintaining reasonable ontological adherence [14-17].
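
To make the template idea concrete, the following minimal Python sketch fills predefined controlled-language templates with ontology elements. The element names and template phrasings are illustrative assumptions and are not drawn from RE-OntoGen or any other tool cited above.

```python
# Illustrative sketch of pattern-based template generation for requirements.
# Element names and template phrasings are hypothetical, not taken from RE-OntoGen.

from dataclasses import dataclass

@dataclass
class OntologyRelation:
    subject: str      # e.g. "Customer"
    predicate: str    # e.g. "place"
    obj: str          # e.g. "Order"
    mandatory: bool   # whether the relation is obligatory in the model

# Controlled-language templates keyed by modality.
TEMPLATES = {
    True:  "The {subject} shall {predicate} at least one {obj}.",
    False: "The {subject} may {predicate} a {obj}.",
}

def generate_requirement(rel: OntologyRelation) -> str:
    """Fill a predefined template with the ontology element names."""
    return TEMPLATES[rel.mandatory].format(
        subject=rel.subject, predicate=rel.predicate, obj=rel.obj
    )

if __name__ == "__main__":
    rel = OntologyRelation("Customer", "place", "Order", mandatory=True)
    print(generate_requirement(rel))
    # -> "The Customer shall place at least one Order."
```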

Model-to-text transformation approaches, prominent in works such as OntoTrace [18-21], systematically map UML or OntoUML elements into natural language through explicit transformation rules. These methods offer high ontological sensitivity with automated traceability checks to ensure semantic consistency. OntoTrace exemplifies this category by automating the generation process while preserving ontological rigor and enabling precise semantic evaluation through built-in verification tools.

Hybrid and large language model (LLM)-assisted systems represent a novel direction combining symbolic reasoning engines with modern language models for natural language generation [22-27]. Achieving medium to high ontological sensitivity, these systems benefit from the flexibility and fluency of LLMs but rely on post-generation human validation to address semantic nuances. OntoLLM demonstrates this integrated approach, merging automated generation and human review to enhance both linguistic quality and semantic traceability in requirements engineering.

The systematic analysis revealed four main methodological categories within OntoUML-based NLG: ontology-driven frameworks, pattern-based templates, model-to-text transformations, and hybrid or LLM-assisted systems. Ontology-driven frameworks achieved high ontological sensitivity but required significant modeling expertise. Template-based systems provided readability at the expense of semantic generality. Model-to-text transformations ensured traceability and correctness, while hybrid systems such as OntoLLM (2025) offered a promising balance between fluency and semantic rigor, as shown in Table 1.

OntoUML–NL Mapping

Table 2 illustrates how different OntoUML constructs translate into natural language expressions while preserving semantic fidelity. Each OntoUML element, such as kind, role, relator, phase, and mixin, corresponds to a distinct ontological category with implications for linguistic realization.

For instance, Kinds (e.g., Customer) express essential, identity-bearing entities, whereas Roles (e.g., Seller) depend relationally on contextual participation. Relators capture relationships such as Purchase that formally link entities, and Phases represent temporally dependent states of an entity (e.g., Employee → Manager). Finally, Mixins allow multiple classifications, enabling complex semantics like overlapping roles (Person as both Customer and Employee). These mappings highlight how linguistic generation must handle ontological nuances to ensure accurate natural language realization [28-31].
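
A minimal sketch of how such a stereotype-to-phrasing mapping could be operationalized is shown below. The phrasing patterns are illustrative assumptions and do not reproduce the paper's Table 2.

```python
# Illustrative realization of an OntoUML-stereotype-to-NL mapping (cf. Table 2).
# Phrasing patterns are assumptions for demonstration, not the paper's exact table.

NL_PATTERNS = {
    "kind":    "{name} denotes an entity type with its own identity that exists independently.",
    "role":    "{name} is a role that an entity of kind {kind} plays while participating in {context}.",
    "relator": "{name} is a relationship that binds {party1} and {party2} together.",
    "phase":   "{name} is a temporary state that an entity of kind {kind} may be in.",
    "mixin":   "{name} is a characteristic shared by entities of different kinds.",
}

def verbalize(stereotype: str, **slots) -> str:
    """Render one OntoUML element as a controlled natural-language sentence."""
    return NL_PATTERNS[stereotype].format(**slots)

print(verbalize("kind", name="Customer"))
print(verbalize("role", name="Seller", kind="Person", context="a Purchase"))
print(verbalize("relator", name="Purchase", party1="a Customer", party2="a Seller"))
print(verbalize("phase", name="Manager", kind="Employee"))
```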

Evaluation Metrics

The evaluation of natural language generation (NLG) systems for OntoUML-based requirements engineering should be multi-dimensional to capture both semantic and linguistic quality. Coverage refers to the degree to which the generated natural language text exhaustively represents all the elements modeled in the OntoUML conceptual diagram, ensuring that no information is omitted and all relevant requirements are included. Correctness focuses on the logical consistency and semantic alignment between the source OntoUML model and the generated requirements, verifying that the meaning and constraints embedded in the ontology are faithfully preserved in the natural language specifications [42]. Traceability is a crucial metric unique to ontology-driven approaches, requiring each generated requirement to be explicitly linked back to its originating OntoUML model element, thereby supporting accountability, maintainability, and ease of auditing throughout the software lifecycle [43, 44]. Finally, readability and fluency assess the linguistic quality of the generated specifications, considering how understandable and natural the text is for stakeholders. This aspect may be evaluated using human expert judgment or through automated metrics like BLEU, BERTScore, or similar, although recent studies note that such metrics should be supplemented by human-centered assessment for comprehensive evaluation [45, 47]. Collectively, these metrics provide a robust framework for systematically appraising the performance, reliability, and stakeholder suitability of OntoUML-to-NL generation pipelines.
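
As an illustration of how the structural metrics might be operationalized, the following sketch computes coverage and traceability over a toy model. The data structures are assumptions made for this example; fluency metrics such as BLEU or BERTScore would be computed separately with established libraries and complemented by human judgment.

```python
# Sketch of coverage and traceability checks for an OntoUML-to-NL pipeline.
# The data structures are assumptions; real tools operate on parsed models and linked outputs.

def coverage(model_elements: set[str], generated_text: str) -> float:
    """Fraction of OntoUML element names that appear in the generated requirements text."""
    mentioned = {e for e in model_elements if e.lower() in generated_text.lower()}
    return len(mentioned) / len(model_elements) if model_elements else 1.0

def traceability(requirements: list[dict], model_elements: set[str]) -> float:
    """Fraction of generated requirements carrying a valid link to a source model element."""
    traced = [r for r in requirements if r.get("source_element") in model_elements]
    return len(traced) / len(requirements) if requirements else 1.0

model = {"Customer", "Order", "Purchase", "Seller", "Invoice"}
reqs = [
    {"text": "The Customer shall place at least one Order.", "source_element": "Customer"},
    {"text": "A Purchase binds a Customer and a Seller.", "source_element": "Purchase"},
]
text = " ".join(r["text"] for r in reqs)
print(f"coverage     = {coverage(model, text):.2f}")      # Invoice is never mentioned -> 0.80
print(f"traceability = {traceability(reqs, model):.2f}")  # both requirements are traced -> 1.00
```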

Human-in-the-Loop Integration: Case Studies and Methods

Human-in-the-loop (HITL) integration is an essential mechanism in OntoUML-aware NL generation pipelines due to the complexity and semantic richness of ontological models. Iterative correction by domain experts and stakeholders enables the refinement of semantic and linguistic errors that automatic systems alone cannot reliably resolve. For example, HITL techniques have been applied in ontology-driven requirements engineering projects where experts review generated requirements to ensure alignment with conceptual model intentions and real-world stakeholder needs. In these settings, iterative feedback cycles allow the correction of ambiguities, ontological misclassifications, or linguistic shortcomings, fostering improved overall quality [48, 49].

Moreover, HITL aids in validating complex ontological relationships that are critical in foundational ontologies such as UFO, which underlies OntoUML. Certain nuanced dependencies or temporal phases in the model may require expert scrutiny that automated reasoning engines cannot fully verify [50-53]. For instance, case studies of ontological requirements engineering for AI systems illustrate how HITL frameworks assist in interpreting trust-related or ethicality requirements modeled in OntoUML, necessitating careful human validation to interpret and adapt generated texts appropriately.

Finally, HITL facilitates alignment with stakeholder expectations, ensuring generated natural language requirements are not only ontologically correct but also understandable and useful for human decision-makers. Collaboration tools that integrate stakeholder feedback early in the generation loop support improved communication and mitigate risks of misinterpretation across multidisciplinary teams. Practical implementations combine formal validation tools with interactive user interfaces, enabling stakeholders to annotate, review, and modify generated content in real time. This holistic integration of human expertise within the ontology-driven NLG pipeline is critical for producing reliable, traceable, and stakeholder-aligned requirements documentation.
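
The following sketch illustrates one possible shape of such an interactive refinement loop, with a generator stub and a reviewer callback standing in for real tooling; none of the function names correspond to a published tool's API.

```python
# Minimal sketch of a human-in-the-loop (HITL) refinement cycle.
# generate() and the reviewer callback are hypothetical stand-ins, not a published tool's API.

from typing import Callable

def generate(model_element: str) -> str:
    """Stand-in for the automated OntoUML-to-NL generator."""
    return f"The system shall record every {model_element}."

def hitl_refine(elements: list[str],
                reviewer: Callable[[str], tuple[bool, str]],
                max_rounds: int = 3) -> list[str]:
    """Generate a draft per element, then iterate expert corrections until approval."""
    approved = []
    for element in elements:
        text = generate(element)
        for _ in range(max_rounds):
            ok, text = reviewer(text)   # expert approves or returns a corrected version
            if ok:
                break
        approved.append(text)
    return approved

# Example: a scripted reviewer that tightens vague modal verbs before approving.
def scripted_reviewer(draft: str) -> tuple[bool, str]:
    if "shall" not in draft:
        return False, draft.replace("should", "shall")
    return True, draft

print(hitl_refine(["Purchase"], scripted_reviewer))
```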

This extensive review highlights substantial progress and persistent challenges in the realm of ontology-driven Natural Language Generation (NLG) for requirements engineering, with particular emphasis on OntoUML-based approaches. The synthesis of recent studies underscores a dynamic research landscape characterized by diversified methodologies, innovative tool development, and hybrid paradigms aimed at enhancing semantic fidelity, traceability, and stakeholder engagement.

A notable trend observed in recent studies is the evolution from traditional pattern-based and model-to-text transformation techniques towards more sophisticated hybrid frameworks. These systems integrate formal ontological reasoning with the linguistic flexibility offered by Large Language Models (LLMs). Such hybrid approaches, exemplified by frameworks like OntoLLM, demonstrate considerable promise by balancing ontological rigor with linguistic expressiveness. Nonetheless, the incorporation of LLMs introduces new complexities, particularly regarding semantic fidelity and reproducibility. The stochastic nature of these models may compromise traceability and ontological compliance, highlighting the necessity for rigorous validation and verification mechanisms within these hybrid pipelines.

The developed taxonomy delineates three established methodological categories (ontology-driven frameworks, pattern-based templates, and model-to-text transformation methods) alongside an emerging fourth category of hybrid, LLM-assisted systems. It further reveals that ontological sensitivity is highest within approaches employing formal reasoning and validation rules; however, these methods often entail significant complexity and lower usability. Conversely, systems designed for broader accessibility tend to sacrifice some degree of semantic depth. Bridging this divide by developing hybrid tools that maintain formal rigor while ensuring operational usability remains a central challenge for advancing the field.

The assessment of semantic fidelity and evaluation metrics remains an area of active development. While current approaches leverage formal reasoning, traceability analyses, and human validation, there is a clear need for standardized, comprehensive evaluation frameworks. Metrics encompassing coverage, correctness, traceability, and linguistic quality, complemented by stakeholder-centered assessments, are vital for systematically gauging system performance. Human-in-the-loop methodologies have proven indispensable in this context, providing iterative validation that aligns generated requirements with stakeholder needs and ontological constraints, especially in complex or ethically sensitive domains.

Despite noteworthy advancements, several critical gaps persist. The reproducibility of research findings, along with the availability of shared datasets and benchmarking platforms, should be prioritized to facilitate transparent comparison and cumulative progress. Additionally, refining ontological modeling tools to improve usability without compromising formal features remains a pertinent objective. Such enhancements will be instrumental in fostering broader adoption within industry and academia alike.

Moreover, future research should explore the integration of ontological reasoning with emerging artificial intelligence paradigms, including explainable AI and knowledge graph technologies, to enable context-aware, semantically rich requirement generation systems. In conclusion, the convergence of ontological engineering, natural language processing, and artificial intelligence heralds promising opportunities for more reliable and stakeholder-oriented requirements engineering practices. While significant strides have been made, addressing the existing challenges in evaluation standards, tool development, and validation methodologies is crucial for translating theoretical advances into practical, real-world applications. Progress in these areas will facilitate the development of documentation that is not only syntactically fluent but also semantically faithful and operationally trustworthy.

First, the lack of standardized benchmarks for evaluating OntoUML-to-natural-language generation systems hinders objective comparison and reproducibility. Establishing open evaluation datasets and shared metrics would enable cross-tool validation and accelerate scientific progress. Future research should focus on curating domain-independent corpora that map OntoUML constructs to validated textual requirements. Second, the combination of OntoUML with knowledge graphs and semantic web technologies presents an opportunity to enhance the expressiveness and interoperability of generated requirements. By embedding OntoUML semantics into graph-based structures, researchers can facilitate richer reasoning, automated traceability, and dynamic querying across distributed systems.
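
As a hedged illustration of the second point, the following sketch uses rdflib to encode OntoUML stereotypes and a derived requirement as RDF triples and to query them for traceability. The namespace, class names, and property names are hypothetical, chosen only for demonstration.

```python
# Sketch of embedding OntoUML semantics into an RDF knowledge graph with rdflib,
# enabling SPARQL queries for traceability. Namespace and property names are illustrative.

from rdflib import Graph, Namespace, RDF, RDFS, Literal

ONTO = Namespace("http://example.org/ontouml#")   # hypothetical namespace
g = Graph()
g.bind("onto", ONTO)

# Record that Customer is a kind and Seller a role, plus a requirement derived from Customer.
g.add((ONTO.Customer, RDF.type, ONTO.Kind))
g.add((ONTO.Seller, RDF.type, ONTO.Role))
g.add((ONTO.Req1, RDF.type, ONTO.Requirement))
g.add((ONTO.Req1, ONTO.derivedFrom, ONTO.Customer))
g.add((ONTO.Req1, RDFS.comment, Literal("The Customer shall place at least one Order.")))

# Traceability query: which requirements were derived from which model elements?
results = g.query("""
    PREFIX onto: <http://example.org/ontouml#>
    SELECT ?req ?element WHERE { ?req onto:derivedFrom ?element . }
""")
for req, element in results:
    print(req, "->", element)
```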

Third, Explainable AI (XAI) methodologies should be integrated into LLM-assisted requirement generation. As neural models become increasingly central to the NLG process, transparency and interpretability become vital to maintain stakeholder trust. Incorporating ontological justifications or semantic alignment explanations into the generation pipeline could improve both confidence and adoption. Fourth, hybrid pipelines combining symbolic and neural reasoning offer a promising avenue for balancing rigor and fluency. Symbolic reasoning ensures consistency with formal ontologies, while neural models enhance contextual adaptability. Future systems should exploit the complementarity of these paradigms through modular architectures, where reasoning modules validate the semantic integrity of neural outputs.
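
A minimal sketch of this validate-then-accept pattern is given below, with a placeholder standing in for the LLM call and a single illustrative constraint rule; it is a sketch under stated assumptions, not an implementation of any published hybrid system.

```python
# Sketch of a hybrid pipeline in which a symbolic check validates an LLM-drafted requirement
# against ontological constraints before acceptance. llm_draft() is a placeholder, not an API.

def llm_draft(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    return "The Seller must exist without participating in any Purchase."

def symbolic_check(requirement: str, constraints: dict[str, str]) -> list[str]:
    """Flag statements that contradict simple ontological constraints (illustrative rules)."""
    violations = []
    for role, relator in constraints.items():
        # In OntoUML a role is existentially dependent on its relator, so a sentence
        # denying that participation is ontologically inconsistent.
        if role in requirement and f"without participating in any {relator}" in requirement:
            violations.append(f"{role} cannot exist outside a {relator} context")
    return violations

constraints = {"Seller": "Purchase"}             # role -> relator it depends on
draft = llm_draft("Verbalize the Seller role.")
problems = symbolic_check(draft, constraints)
if problems:
    print("Rejected draft:", problems)           # send back for regeneration or human review
else:
    print("Accepted:", draft)
```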

Finally, there is an urgent need for end-to-end tooling ecosystems that integrate modeling, reasoning, generation, and evaluation within a unified environment. Most current approaches exist as prototypes or research tools with limited usability. Scalable, user-friendly platforms that support interactive feedback loops, continuous learning, and semantic traceability would represent a major step toward industrial applicability.

This paper provides an extended and critical review of ontology-driven natural language generation (NLG) within requirements engineering, with a particular focus on OntoUML as a semantically rigorous foundational framework. Building upon the foundational work of Abdalazeim and Meziane (2021), the review synthesizes recent advancements from 2010 to 2025, highlighting notable trends such as the shift towards hybrid reasoning approaches and the integration of large language models (LLMs).

The analysis underscores that OntoUML’s ontological grounding significantly enhances the semantic clarity and traceability of automatically generated Software Requirements Specifications (SRS). Nonetheless, achieving an optimal balance between semantic precision and linguistic naturalness remains an ongoing challenge. While traditional ontology-driven approaches excel in formal rigor, they tend to lack flexibility, whereas neural methods provide fluent language output but often compromise ontological fidelity.

To address these limitations, the review advocates for hybrid OntoUML–NLG pipelines that combine formal ontological reasoning with the linguistic fluency of large language models, supported by standardized evaluation metrics and systematic human-in-the-loop validation.

This research forms part of the PhD dissertation work of Alaa Abdalazeim at the Department of Information Technology, Sudan University of Science and Technology, Sudan, under the supervision of Prof. Farid Meziane, University of Derby, United Kingdom. The authors gratefully acknowledge the institutional and academic support that enabled the completion of this study.

  1. Zhao L, Alhoshan W, Ferrari A, et al (2020) Natural Language Processing (NLP) for Requirements Engineering: A Systematic Mapping Study. arXiv.
  2. Guizzardi G (2022) UFO: Unified Foundational Ontology. Appl Ontol. 17: 167-210.
  3. Nieuwland T (2024) Enhancing OntoUML Accessibility through Structured Text Generation and Text-to-Speech. University of Twente.
  4. Abdalazeim A, Meziane F (2021) A review of the generation of requirements specification in natural language using objects UML models and domain ontology. Procedia Comput Sci. 189: 328-34.
  5. Guizzardi G (2005) Ontological foundations for structural conceptual models. PhD thesis, University of Twente.
  6. Guizzardi G, Halpin T (2008) Ontological foundations for conceptual modelling. Applied Ontology. 3: 1–12.
  7. Guizzardi G (2007) Modal aspects of object types and part–whole relations and the de re/de dicto distinction. In: CAiSE (LNCS 4495).
  8. Guerson J, Sales TP, Santanchè F, Guizzardi G (2015) OntoUML Lightweight Editor: a model-based environment to build, evaluate and implement reference ontologies. In: EDOC Workshops.
  9. Sales TP, Guizzardi G, Guerson J. Tool-supported OntoUML modelling and antipattern detection. In: International Conference on Conceptual Modeling (posters/workshops).
  10. Barcelos PPF, Guizzardi G, Almeida J (2014) An automated transformation from OntoUML to OWL and SWRL. CEUR Workshop Proceedings.
  11. Braga BFB, Almeida JPA, Guizzardi G, Benevides AB (2010) Transforming OntoUML into Alloy: towards conceptual model validation using a lightweight formal method. Softw Syst Model. 9: 75-95
  12. Valaski J, Pires P, Silva M (2017) Deriving domain functional requirements from OntoUML conceptual models. In: Proceedings of the 12th International Conference on Software Technologies (SCITEPRESS). p. 287-94.
  13. Mammadli C (2024) A Systematic Review of Traceability in Requirements Engineering of Socio-technical Systems: Industrial Practices and Needs [Master’s Thesis]. Faculty of Science and Technology, University of Tartu.
  14. Barcelos PPF, Coelho R, Guizzardi G (2013) An automated transformation from OntoUML to OWL and SWRL. In: Proceedings of the 4th International Workshop on Ontologies and Conceptual Modeling (ontobras). 131-45.
  15. Rector AL, Qamar S (2023) From concept to ontology: formalising OntoUML patterns in OWL lessons and techniques. CEUR Workshop Proceedings. 3603: 211-19.
  16. Rector AL, Rogers J, Goble CA, et al. (2019) Ontology verbalization and documentation methods: analysis and tools. Semantic Web. 10: 71-89.
  17. Stowe K, Ghosh D, Zhao M. (2022) Controlled Language Generation for Language Learning Items. In: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track. Abu Dhabi, UAE: Association for Computational Linguistics. 294–305.
  18. Pattanotai A (2021) Automatic generation of class descriptions from UML class diagrams (two-step tree approach). In: Abdalazeim I, editor. A review of the generation of requirements specification in natural language using objects, UML models, and domain ontology.
  19. Power R. (2018) Using controlled natural languages for ontology authoring and verbalization. In: Proceedings of the 10th International Workshop on Controlled Natural Language (CNL 2018). Lecture Notes in Computer Science. Cham: Springer. 138-153.
  20. Fuchs NE, Schwitter R (1996) Attempto Controlled English (ACE). In: Proceedings of the First International Workshop on Controlled Language Applications (CLAW 96); Leuven, Belgium 1-13.
  21. Valaski J, Reinehr S, Malucelli A (2017) Deriving Domain Functional Requirements from Conceptual Models Represented in OntoUML. In: Proceedings of the International Conference on Enterprise Information Systems (ICEIS). 92-102.
  22. Schwitter R, Tilbrook M. Attempto Controlled English (ACE) for ontology verbalization and validation.
  23. Garcia A, et al. Lexicon and linguistic patterns for ontology verbalization: best practices and tool support.
  24. Mosquera D, Ruiz M, Pastor O, Spielberger J, Fievet L (2022) OntoTrace: A tool for supporting trace generation in software development by using ontology-based automatic reasoning. CAiSE Forum Preprint.
  25. Mosquera D, Ruiz M, Pastor O, et al. (2023) Ontology-based automatic reasoning and NLP for tracing software requirements into models with the OntoTrace tool. In: Ferrari A, Penzenstadler B (eds).
  26. Ayyagari KC (2025) Hybrid Reasoning System for a Large Language Model Operating System Incorporating Symbolic Inference and Knowledge Graph Integration. Technical Disclosure Commons.
  27. Uchitel S, Hazaël-Massieux D, Breton Z. Ontology-Driven LLM Assistance for Task-Oriented Systems Engineering. In: Proceedings of the 20XX CLEI Conference
  28. Mosquera D, Ruiz M, Pastor O, et al. (2025) Ontology-based NLP tool for tracing software requirements and conceptual models: an empirical study. Requirements Eng.
  29. Cleland-Huang J, Gotel O, Hayes J, et al. Software traceability: A roadmap.
  30. Z. M. et al. Pattern language for traceability construction and maintenance. ICSE/ACM.
  31. Sales TP, Guizzardi G, Almeida J. Ontological anti-patterns: definitions and empirical investigation.
  32. Herchi H, Ben Abdessalem W (2012) From user requirements to UML class diagram. arXiv: 1211.0713.
  33. Sharma S, Srivastava A. From natural language requirements to UML class diagrams. Conference paper.
  34. Abdelnabi M, Maatuk A. Generating UML class diagram using NLP techniques. Conference/Journal.
  35. Meng Y, et al (2024) Automated UML generation from textual requirements — overview and tool comparisons. JOIV.
  36. Kuhn T. A survey of controlled natural languages for knowledge representation and ontology verbalization. Comput Linguist / CNL Proceedings.
  37. Schwitter R. Attempto Controlled English (ACE) resources for ontology-driven NLG.
  38. Van Deemter K, Krahmer E, Theune M (2005) Squibs and discussions: Real versus template-based natural language generation: a false opposition? Computational Linguistics. 31: 15-24.
  39. OLED project pages and tool downloads.
  40. OntoTrace project pages, ZHAW digital collection and CAISE preprint.
  41. RE datasets for NL↔model (PROMISE / RE dataset collections).
  42. Pattanotai A. Automatic class description generation.
  43. Power R, Scott D. Multilingual verbalization of ontologies: methods and experiences.
  44. Dobnik S, et al. Data-to-text and model-to-text NLG techniques applicable to UML→NL. ACL/INLG surveys.
  45. Silva A, Fang S, Monperrus M (2024) RepairLLaMA: efficient adapters for program repair. IEEE Transactions on Software Engineering. Preprint available on arXiv.
  46. Browne A, Ahmed T, Devanbu P, Treude C, Pradel M (2025) Evaluating Large Language Models for software engineering artifacts: mapping natural language to UML fidelity. In: Proceedings of the 2023–2025 conferences on Software Engineering. ACM Digital Library.
  47. Tools that combine ontology reasoning + LLM prompting for better semantic fidelity. Workshops 2023–2025.
  48. Triandini E, Fitria R, Nurhayati, Fahmi I (2021) A systematic literature review of the role of ontology in modeling knowledge in software development processes. Journal of Theoretical and Applied Information Technology. 99: 6328-41.
  49. Wan H, He X, Deng Y, Wang B (2025) A systematic mapping study of information retrieval-based requirements traceability methods. Inf Process Manage. 62: 102864.
  50. Abdelnabi EA, Maatuk AM, Abdelaziz TM, Elakeili SM (2021) Generating UML Class Diagram from Natural Language Requirements: A Survey of Approaches and Techniques. In: 2021 IEEE 1st International Maghreb Meeting of the Conference on Sciences and Techniques of Automatic Control and Computer Engineering (MI-STA); Tunisia. IEEE. 288-93.
  51. Guizzardi G (2024) Ontology-based requirements engineering: the case of ethicality requirements for trustworthy AI systems. In: Proceedings of the 17th International iStar Workshop; Pittsburgh, PA, USA. CEUR Workshop Proceedings. 3936: 1-2.
  52. Sales TP, Guizzardi G (2015) Ontological anti-patterns: empirically uncovered error-prone structures in ontology-driven conceptual models. Data Knowl Eng. 98: 54-79.
  53. Frattini J, Unterkalmsteiner M, Fucci D, Mendez D (2024) Tools for natural language processing in requirements engineering: A systematic classification and overview of NLP4RE tools (2019–2023). In: Proceedings of the REFSQ Workshops and Co-Located Events 2024; 2024 Apr 8-11; Winterthur, Switzerland. CEUR Workshop Proceedings. 3672: 1-15.