Distributed and Self-organizing Systems

Web Engineering Seminar (SS 2024)
Welcome to the homepage of the Web Engineering Seminar

This website contains all important information about the seminar, including links to available topics as well as information about the seminar process in general.

The interdisciplinary research area Web Engineering develops approaches for the methodological construction of Web-based applications and distributed systems, as well as their continuous development (evolution). For instance, Web Engineering deals with the development of interoperable Web Services, the implementation of web portals using service-oriented architectures (SOA), fully accessible user interfaces, or even exotic web-based applications that are voice-controlled via the telephone or rendered on TV and radio.

The following steps are necessary to complete the seminar:

  • Preparation of a presentation about the topic assigned to you.
  • An additional written report on your topic.
  • Each report is reviewed by two or three other participants.

Seminar chairs

Samuel

Gaedke


Contact

If you have any questions concerning this seminar or the exam as a participant, please contact us via OPAL.

We also offer a Feedback system, where you can provide anonymous feedback for a particular session to the presenter on what you liked or where we can improve.

Participants

The seminar is offered for students of the following programmes (for pre-requisites, please refer to your study regulations):

Students who are interested in the Proseminar, Hauptseminar, or Forschungsseminar (applies to all other study courses) will find all information here.

If your programme is not listed here, please contact us prior to seminar registration and indicate your study programme, the version (year) of your study regulations (Prüfungsordnungsversion) and the module number (Modulnummer) to allow us to check whether we can offer the seminar for you and find an appropriate mapping.

Registration

You may only participate after registering in the Seminar Course in OPAL.

The registration opens on 21.03.2024 and ends on 07.04.2024 at 23:59. As the available slots are usually booked rather quickly, we recommend completing your registration soon after it opens.

Topics and Advisors

Questions:

  • How can distributed data be acquired in a trustworthy manner with governance?
  • What is a reasonable approach for introducing governance to common web application architectures like MVC?
  • What are limits and open challenges of governance for trustworthy data acquisition?

Questions:

  • Which challenges exist in querying large-scale RDF data?
  • What does HDT aim to improve and how does it work?
  • How can HDT be applied?
  • What are limitations of HDT? How can arbitrary SPARQL queries be performed on an HDT dataset?
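The dictionary-plus-triples idea at the core of HDT can be illustrated with a small, self-contained sketch. This is plain Python for intuition only, not the actual HDT binary format (which adds bit-sequence compression and index structures), and all terms are invented:

```python
# Conceptual sketch of HDT's core idea: replace repeated IRI strings
# with integer IDs (the "dictionary" component), then store triples
# compactly as sorted ID tuples (the "triples" component).

triples = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:alice", "foaf:name", '"Alice"'),
    ("ex:bob", "foaf:name", '"Bob"'),
]

# Dictionary component: term -> integer ID
dictionary = {}
def term_id(term):
    return dictionary.setdefault(term, len(dictionary) + 1)

# Triples component: integer encoding, sorted to allow range scans
encoded = sorted((term_id(s), term_id(p), term_id(o)) for s, p, o in triples)

# A simple triple-pattern lookup (None acts as a wildcard), analogous
# to resolving a basic SPARQL triple pattern over an HDT file
def match(s=None, p=None, o=None):
    ids = (dictionary.get(s), dictionary.get(p), dictionary.get(o))
    for t in encoded:
        if all(q is None or q == v for q, v in zip(ids, t)):
            yield t

print(len(list(match(s="ex:alice"))))  # 2 triples with subject ex:alice
```

A real implementation would also need the reverse dictionary (ID to term) to materialize query results, which is where much of HDT's engineering effort lies.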

Literature:

  • Martínez-Prieto, Miguel A. & Arias, Mario & Fernández, Javier. (2012). Exchange and Consumption of Huge RDF Data. 7295. 437-452. 10.1007/978-3-642-30284-8_36.
  • Fernández, Javier & Martínez-Prieto, Miguel A. & Gutierrez, Claudio & Polleres, Axel & Arias, Mario. (2013). Binary RDF Representation for Publication and Exchange (HDT). Journal of Web Semantics. 19. 22-41. 10.1016/j.websem.2013.01.002.
  • Own research

Questions:

  • What are common ways to classify the topic or topics of a linked data resource?
  • How accurate and fast are those approaches?
  • How can the heterogeneity of the web be tackled when it comes to topic classification of linked data?

Literature:

  • Own research

Questions:

  • What are Dark Patterns? Where can they be found? How are they defined? List commonly known taxonomies. What are Usability Smells and where is the difference to Dark Patterns?
  • What are Conversational User Interfaces? Show us different kinds and how they work. Which Dark Patterns could be adapted to Chatbots? Which Dark Patterns could be new in Chatbots?
  • You will get access to a dataset of negative user interactions with chatbots via a repository. Search additional samples and document the procedure (where did you search, which search terms did you use, list of all examples that you found, list of examples that you excluded, ...), add at least 20 new samples.
  • Create a codebook (here, a table) where you list both the given and the newly added samples and code them according to the following criteria: Is it an established Dark Pattern? Is it a potential new Dark Pattern? Is it a Usability Smell? Is it neither? Is information missing for a decision? Both students should code the samples independently and then meet with the advisor to discuss the results and unclear cases. Present the consolidated results in a demonstration.
  • Reflect on the process: What did you learn? Where did you have problems? How could you solve them?
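Since both students code the samples independently, it can be instructive to quantify their agreement before the consolidation meeting. A minimal sketch of Cohen's kappa; the labels and data below are invented for illustration:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    # Agreement between two coders beyond what chance alone would predict
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical codings of six samples by the two students
coder_1 = ["dark", "smell", "dark", "neither", "dark", "smell"]
coder_2 = ["dark", "smell", "smell", "neither", "dark", "dark"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # 0.45
```

Low kappa values point to samples or categories worth discussing in the meeting with the advisor.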

Literature:

  • Own research
  • Traubinger, V., Heil, S., Grigera, J., Garrido, A., Gaedke, M. (2024). In Search of Dark Patterns in Chatbots. In: Følstad, A., et al. Chatbot Research and Design. CONVERSATIONS 2023. Lecture Notes in Computer Science, vol 14524. Springer, Cham. https://doi.org/10.1007/978-3-031-54975-5_7
  • Colin M. Gray, Nataliia Bielova, Cristiana Santos, and Thomas Mildner. 2023. An Ontology of Dark Patterns: Foundations, Definitions, and a Structure for Transdisciplinary Action. https://arxiv.org/abs/2309.09640
  • Brignull, H.: Deceptive patterns. https://www.deceptive.design
  • Gray, C.M., Sanchez Chamorro, L., Obi, I., Duane, J.N.: Mapping the landscape of dark patterns scholarship: A systematic literature review. In: Companion Publication of the 2023 ACM Designing Interactive Systems Conference. pp. 188–193 (2023)
  • Gray, C.M., Santos, C., Bielova, N.: Towards a preliminary ontology of dark patterns knowledge. In: Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems. pp. 1–9 (2023)
  • Grigera, J., Garrido, A., Rivero, J.M., Rossi, G.: Automatic detection of usability smells in web applications. International Journal of Human-Computer Studies 97, 129–148 (2017)
  • Mathur, A., Kshirsagar, M., Mayer, J.: What makes a dark pattern... dark? Design attributes, normative considerations, and measurement methods. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. pp. 1–18 (2021)

Questions:

  • Create a corpus of Customer Service Chatbots that you analyze according to certain criteria. The corpus should include at least 70 different chatbots. For this you will get access to a GitLab repository in which you can save your results. Build a strategy on how to systematically search for these customer service chatbots and describe this methodology.
  • The corpus should include at least information about the corporation, its field, the source, how the chatbot introduces itself, information on the training data if available, possible access restrictions, information on the user interface, etc.
  • Additionally, have a look at the code and categorize it. The following questions can serve as initial criteria and be extended; use the available literature to formulate specific criteria: Is the code openly accessible and readable? Are third-party chatbots used? If yes, which ones? Are the chatbots included from the outside, e.g. via API calls? How are they invoked in the code?

Literature:

  • Own research
  • Adamopoulou, E., Moussiades, L. (2020). An Overview of Chatbot Technology. In: Maglogiannis, I., Iliadis, L., Pimenidis, E. (eds) Artificial Intelligence Applications and Innovations. AIAI 2020. IFIP Advances in Information and Communication Technology, vol 584. Springer, Cham. https://doi.org/10.1007/978-3-030-49186-4_31
  • Adamopoulou, Eleni, and Lefteris Moussiades. "Chatbots: History, technology, and applications." Machine Learning with Applications 2 (2020): 100006. https://doi.org/10.1016/j.mlwa.2020.100006
  • Akma, N., Hafiz, M., Zainal, A., Fairuz, M., Adnan, Z.: Review of chatbots design techniques. Int. J. Comput. Appl. 181, 7–10 (2018). https://doi.org/10.5120/ijca2018917606
  • M. Baez, F. Daniel, F. Casati and B. Benatallah, "Chatbot Integration in Few Patterns," in IEEE Internet Computing, vol. 25, no. 3, pp. 52-59, 1 May-June 2021, doi: 10.1109/MIC.2020.3024605.

Questions:

  • How does a Systematic Literature Review work? Prepare a guideline for computer science students explaining the main aspects and include a list of relevant publication search engines/catalogues.
  • What does "systematic" mean in SLR, and how is it different from other literature review methods? How does it compare to a Structured Literature Review? How does it compare to Systematic Mapping Studies? What are risks and limitations of the method?
  • How are research questions represented/quantified? What does coding mean in this context?
  • How are search queries constructed? Explain the technique of query expansion for generating additional queries.
  • Which SLR artifacts should be provided to allow for reproducibility and replicability?
  • What tools exist to support SLRs? Demonstrate a suitable tool.
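The query-construction step can be sketched as follows: each concept is expanded with synonyms (joined by OR), and the concept groups are combined with AND into one Boolean search string. The concepts and synonyms below are invented examples:

```python
# Query expansion sketch for SLR search strings, as accepted in
# similar form by many digital libraries (IEEE Xplore, Scopus, etc.)
concepts = {
    "web engineering": ["web engineering", "web application development"],
    "migration": ["migration", "modernization", "re-engineering"],
}

def build_query(concepts):
    # One parenthesized OR-group per concept, AND-ed together
    groups = ['(' + ' OR '.join(f'"{t}"' for t in terms) + ')'
              for terms in concepts.values()]
    return ' AND '.join(groups)

print(build_query(concepts))
```

Documenting the generated queries per search engine is one of the artifacts that makes the review reproducible.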

Literature:

  • Kitchenham, B. (2004). Procedures for Undertaking Systematic Reviews. https://www.inf.ufsc.br/~aldo.vw/kitchenham.pdf
  • Kitchenham, B., Pearl Brereton, O., Budgen, D., Turner, M., Bailey, J., & Linkman, S. (2009). Systematic literature reviews in software engineering - A systematic literature review. Information and Software Technology, 51(1), 7–15.
  • Brereton, P., Kitchenham, B. a., Budgen, D., Turner, M., & Khalil, M. (2007). Lessons from applying the systematic literature review process within the software engineering domain. Journal of Systems and Software, 80(4), 571–583.
  • Petersen, K., Vakkalanka, S., & Kuzniarz, L. (2015). Guidelines for conducting systematic mapping studies in software engineering: An update. Information and Software Technology, 64, 1–18.
  • Díaz, O., Medina, H., & Anfurrutia, F. I. (2019). Coding-Data Portability in Systematic Literature Reviews. Proceedings of the Evaluation and Assessment on Software Engineering - EASE ’19, 178–187.
  • Khadka, R., Saeidi, A. M., Idu, A., Hage, J., & Jansen, S. (2013). Legacy to SOA Evolution: A Systematic Literature Review. In A. D. Ionita, M. Litoiu, & G. Lewis (Eds.), Migrating Legacy Applications: Challenges in Service Oriented Architecture and Cloud Computing Environments (pp. 40–71). IGI Global.
  • Jamshidi, P., Ahmad, A., & Pahl, C. (2013). Cloud Migration Research: A Systematic Review. IEEE Transactions on Cloud Computing, 1(2), 142–157.
  • Rai, R., Sahoo, G., & Mehfuz, S. (2015). Exploring the factors influencing the cloud computing adoption: a systematic study on cloud migration. SpringerPlus, 4(1), 197.
  • A. Hinderks, F. José, D. Mayo, J. Thomaschewski and M. J. Escalona, "An SLR-Tool: Search Process in Practice : A tool to conduct and manage Systematic Literature Review (SLR)," 2020 IEEE/ACM 42nd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion), 2020, pp. 81-84.
  • PRISMA 2020 http://www.prisma-statement.org/

Questions:

  • What is empirical Software Engineering Evaluation and how can it be done?
  • Why is evaluation important in the research process?
  • What is the difference between a qualitative and quantitative evaluation? (When do you use which one? What are advantages and disadvantages?)
  • Prepare a list of evaluation methods and tools that can be used to evaluate software. Explain them and add relevant literature for these methods.
  • Demonstrate one quantitative and one qualitative method. For this, find a feasible research question, conduct a survey on it with each of the two methods, compute the results, and discuss and present them. You can choose a low-level topic of your own that is related to Web Engineering. Use statistical methods to compute the results.
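For the quantitative part, Python's standard library statistics module already covers basic descriptive statistics. The Likert-scale responses below are invented purely for illustration; a significance test (e.g. a t-test) would typically follow the descriptive summary:

```python
import statistics as st

# Invented Likert-scale (1-5) survey responses from two groups,
# e.g. users rating two variants of a web user interface
variant_a = [4, 5, 3, 4, 4, 5, 2, 4]
variant_b = [3, 2, 4, 3, 3, 2, 3, 4]

for name, data in (("variant A", variant_a), ("variant B", variant_b)):
    print(name,
          "mean:", round(st.mean(data), 2),
          "stdev:", round(st.stdev(data), 2),
          "median:", st.median(data))
```

Reporting dispersion (standard deviation) alongside the mean is important: two variants with equal means but different spreads tell very different stories.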

Literature:

  • Own research
  • Creswell, J. W. (2014). Research design : Qualitative, quantitative, and mixed methods approaches (4. ed., in). SAGE. https://katalog.bibliothek.tu-chemnitz.de/Record/0008891954
  • Wohlin, C., Runeson, P., Höst, M., Ohlsson, M. C., Regnell, B., & Wesslén, A. (2012). Experimentation in Software Engineering. In Experimentation in Software Engineering (Vol. 9783642290). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-29044-2
  • Chatzigeorgiou, A., Chaikalis, T., Paschalidou, G., Vesyropoulos, N., Georgiadis, C. K., & Stiakakis, E. (2015). A Taxonomy of Evaluation Approaches in Software Engineering. Proceedings of the 7th Balkan Conference on Informatics Conference - BCI ’15, 1–8. https://doi.org/10.1145/2801081.2801084
  • Wainer, J., Novoa Barsottini, C. G., Lacerda, D., & Magalhães de Marco, L. R. (2009). Empirical evaluation in Computer Science research published by ACM. Information and Software Technology, 51(6), 1081–1085. https://doi.org/10.1016/j.infsof.2009.01.002

Questions:

  • What is design science research? What are the objectives of design science research?
  • What are its activities? How is the research conducted? How are the results evaluated?
  • In which research areas of computer science is this methodology most practical?
  • Using design science research, produce a viable and simplified artifact in the form of a construct, a model, or a method, and demonstrate the activities.

Literature:

  • Own research
  • Johannesson, P. & Perjons, E. (2021). An Introduction to Design Science. https://link.springer.com/book/10.1007/978-3-030-78132-3

Questions:

  • How is a scientific work, especially a thesis in computer science, structured? What sections should a thesis contain and what purpose do they have? Give an overview.
  • What is the importance of an introduction? What should it contain? How long should it be? How is it related to the other sections in a scientific work?
  • What makes a "good" motivation for your scientific work? Why is it important for the readers? What are methods to write it so the reader can relate to the writer?
  • What is the scope of a scientific work? Why is it important? How should you include the scope in the introduction?
  • What are current and well-known best practices/guidelines/schemes/principles/advice? What evidence base (e.g. experimental studies) supports them? Present them.
  • Choose a suitable scientific work and work out a way to visually represent its whole structure. Show how the introduction, motivation, and scope relate to the other parts.

Literature:

  • Own research
  • Peat, J., Elliott, E., Baur, L., & Keena, V. (2013). Scientific writing: easy when you know how. John Wiley & Sons. DOI:10.1002/9781118708019
  • Barbara Minto: The Pyramid Principle. Pearson Education, 2009.
  • Mensh, B., & Kording, K. (2017). Ten simple rules for structuring papers. PLoS computational biology, 13(9), e1005619. DOI: https://doi.org/10.1371/journal.pcbi.1005619
  • J. M. Setchell, “Writing a Scientific Report,” in Studying Primates: How to Design, Conduct and Report Primatological Research, Cambridge: Cambridge University Press, 2019, pp. 271–298.
  • Blackwell, J., & Martin, J. (2011). A scientific approach to scientific writing. Springer Science & Business Media.
  • Williams, J. M., & Bizup, J. (2014). Lessons in clarity and grace. Pearson.
  • Oguduvwe, J. I. P. (2013). Nature, Scope and Role of Research Proposal in Scientific Investigations. IOSR Journal Of Humanities And Social Science (IOSR-JHSS), 17(2), 83-87. https://www.iosrjournals.org/iosr-jhss/papers/Vol17-issue2/L01728387.pdf

Questions:

  • What guidelines and principles exist to safeguard Good Research Practice?
  • How can these guidelines and principles be integrated into the research process?
  • What is scientific misconduct/scientific malpractice?
  • What is considered "high-quality research"? What are indicators thereof?

Literature:

  • Own research
  • The European Code of Conduct for Research Integrity http://www.allea.org/wp-content/uploads/2017/03/ALLEA-European-Code-of-Conduct-for-Research-Integrity-2017-1.pdf
  • Guidelines for Safeguarding Good Research Practice https://www.dfg.de/download/pdf/foerderung/rechtliche_rahmenbedingungen/gute_wissenschaftliche_praxis/kodex_gwp_en.pdf
  • Open Research Data and Data Management Plans https://erc.europa.eu/sites/default/files/document/file/ERC_info_document-Open_Research_Data_and_Data_Management_Plans.pdf

Questions:

  • What is the initial problem to be solved with R2RML?
  • How is R2RML different from other mappings, such as RDB2RDF?
  • What use cases is R2RML intended for, and what limitations exist?
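What an R2RML mapping expresses can be sketched procedurally: each table row yields a subject IRI from a template, and mapped columns yield predicate-object pairs. This is only a conceptual Python illustration; real R2RML mappings are written declaratively in Turtle using the rr: vocabulary, and the table and IRIs below are invented:

```python
# Conceptual sketch of a relational-to-RDF mapping in the spirit of R2RML
rows = [
    {"id": 1, "name": "Alice", "dept": "CS"},
    {"id": 2, "name": "Bob", "dept": "Math"},
]

mapping = {
    # Analogous to rr:template for the subject map
    "subject_template": "http://example.org/employee/{id}",
    # Analogous to predicate-object maps, one per mapped column
    "predicates": {"name": "foaf:name", "dept": "ex:department"},
}

triples = []
for row in rows:
    subject = mapping["subject_template"].format(**row)
    for column, predicate in mapping["predicates"].items():
        triples.append((subject, predicate, str(row[column])))

print(len(triples))  # 2 rows x 2 mapped columns = 4 triples
```

The declarative nature of real R2RML is precisely what distinguishes it from ad-hoc conversion scripts like this one.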

Questions:

  • What are the differences between RDF 1.1 and RDF 1.2?
  • What is RDF-star and how does it differ from the current RDF standard?
  • What are Quoted Triples, how are they constructed, and what is their intended use?
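The idea behind quoted triples can be illustrated without any RDF library: a triple itself becomes the subject (or object) of further triples. The terms below are invented, and Python tuples merely stand in for RDF-star's << ... >> syntax:

```python
# A plain triple, represented as a tuple
fact = ("ex:bob", "ex:age", "23")

# Statements about the statement: provenance and confidence metadata
# attached to the quoted triple itself, not to ex:bob
meta = [
    (fact, "ex:statedBy", "ex:wikidata"),
    (fact, "ex:certainty", "0.9"),
]

# Collect all metadata attached to a given quoted triple
annotations = [(p, o) for s, p, o in meta if s == fact]
print(annotations)
```

This statement-level annotation is the main use case quoted triples target, replacing the much more verbose standard RDF reification vocabulary.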

Questions:

  • What are Patterns? What are Interaction Design Patterns? How are they connected? Explain their structure on an example (including several patterns).
  • What are their benefits for Web Engineers and Users?
  • Collect a dataset of 40 websites with chatbots and evaluate your interaction with the chatbot in a systematic way. Present your idea for a methodology in the Short Presentation.
  • Map your interactions to the list of Interaction Design Patterns by Tidwell et al. Present and analyse your results: Which patterns did you find? How often did you find each pattern? Did you find some in an adapted form? How was it adapted? Did you find interactions which are not yet listed by Tidwell et al.? Describe them in the same schema.

Literature:

  • Own research
  • J. Tidwell, C. Brewer, and A. Valencia, Designing interfaces: patterns for effective interaction design, Third edition. Sebastopol, CA: O'Reilly, 2020. (available via uni library)
  • Preprint (will be provided) from V. Traubinger and M. Gaedke: Interaction Design Patterns of Web Chatbots
  • C. Alexander, S. Ishikawa, M. Silverstein, M. Jacobson, I. F. King, and S. Angel, A Pattern Language: Towns, Buildings, Construction. in Center for Environmental Structure series, no. v. 2. New York: Oxford University Press, 1977.
  • A. Shevat, Designing bots: creating conversational experiences, First edition. Beijing ; Boston: O’Reilly, 2017.
  • F. A. M. Valério, T. G. Guimarães, R. O. Prates, and H. Candello, “Chatbots Explain Themselves: Designers’ Strategies for Conveying Chatbot Features to Users,” JIS, vol. 9, no. 3, p. 1, Dec. 2018, doi: 10.5753/jis.2018.710.

Questions:

  • Knowledge graphs (KGs), a term coined by Google in 2012 to refer to its general-purpose knowledge base, reduce the need for large labelled Machine Learning datasets, facilitate transfer learning, and help generate explanations. KGs are used in many industrial AI applications, including digital twins, enterprise data management, supply chain management, procurement, and regulatory compliance.
  • Give a short introduction into the general approach and the state of the art for Knowledge Graphs and trustworthiness in KGs.
  • How can we make Knowledge Graphs trustworthy?
  • What are the trustworthiness measurements for KGs, and what are untrustworthy patterns in Knowledge Graphs?
  • Give 3 different examples of trustworthy KGs.
  • Demonstrate your findings by giving examples of trustworthy KGs in comparison with examples of untrustworthy patterns in KGs across fields of your choice.

Literature:

  • Jia, S., Xiang, Y., Chen, X., & Wang, K. (2019, May). Triple trustworthiness measurement for knowledge graph. In The World Wide Web Conference (pp. 2865-2871).
  • Yan, Y., Yu, P., Fang, H., & Wu, Z. (2022, June). Trustworthiness Measurement for Multimedia Domain Knowledge Graph. In 2022 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB) (pp. 1-5). IEEE.
  • Huber, R. and Klump, J.: The Dark Side of the Knowledge Graph - How Can We Make Knowledge Graphs Trustworthy?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13071, https://doi.org/10.5194/egusphere-egu2020-13071, 2020.
  • Ge, Y., Ma, J., Zhang, L., Li, X., & Lu, H. (2023). Trustworthiness-aware knowledge graph representation for recommendation. Knowledge-Based Systems, 278, 110865.

Questions:

  • Give a short introduction to explainable AI (XAI) and Knowledge Graphs (KGs), and a short introduction to the role of KGs in XAI.
  • State and compare the respective opportunities and challenges of KGs in XAI.
  • Where is potential for an integration of both approaches, and what would be the benefit of such synergies for various applications? (For example, what are the benefits of using KGs in explainable recommendation systems?)
  • Explain some ways KGs could be or already have been used to improve XAI.
  • Give a more detailed review, supplemented by a short demonstration of one successful realization of one combined approach.
  • What are future trends in the XAI field?

Literature:

  • Lecue, F. (2020). On the role of knowledge graphs in explainable AI. Semantic Web, 11(1), 41-51.
  • Rajabi, E., & Etminani, K. (2022). Knowledge-graph-based explainable AI: A systematic review. Journal of Information Science, 01655515221112844.
  • Tiddi, I., & Schlobach, S. (2022). Knowledge graphs as tools for explainable machine learning: A survey. Artificial Intelligence, 302, 103627.

Questions:

  • Explainable recommendation refers to personalized recommendation algorithms that address the problem of why: they not only provide users or system designers with recommendation results, but also explanations to clarify why such items are recommended.
  • Give a short introduction into the general approach and the state of the art for Explainable Recommendation Systems.
  • What are the fairness measurements for Explainable Recommendation Systems?
  • How can we obtain fairness-aware explainable recommendations using KGs?
  • How can we mitigate provider-side unfairness in Explainable Recommendation Systems?
  • Give a more detailed review, supplemented by a short demonstration of one successful realization of one combined approach.

Literature:

  • Dinnissen, K., & Bauer, C. (2022). Fairness in music recommender systems: A stakeholder-centered mini review. Frontiers in Big Data, 5, 913608.
  • Chen, J., Dong, H., Wang, X., Feng, F., Wang, M., & He, X. (2020). Bias and debias in recommender system: a survey and future directions. arXiv preprint arXiv:2010.03240.
  • Abdollahpouri, H., Adomavicius, G., Burke, R., Guy, I., Jannach, D., Kamishima, T., ... & Pizzato, L. (2020). Multistakeholder recommendation: Survey and research directions. User Modeling and User-Adapted Interaction, 30, 127-158.
  • Balloccu, G., Boratto, L., Fenu, G., Marras, M. (2022). Hands on explainable recommender systems with knowledge graphs. In Proceedings of the 16th ACM Conference on Recommender Systems, pp. 710-713.

Questions:

  • What is trustworthy AI? What characteristics does AI need to be trustworthy? How can the trustworthiness of AI be measured?
  • Why is trustworthy AI recommended over standard AI when used in the healthcare domain?
  • Demonstrate the differences between trustworthy and untrustworthy AI by showing an example of both in chatbots.

Questions:

  • What is machine unlearning? In which domains can it be applied?
  • What algorithms are used in machine unlearning? Explain the differences between machine learning and machine unlearning.
  • Find a tutorial related to a machine unlearning implementation and demonstrate it.

Questions:

  • Introduction: Prompt engineering involves crafting text in a manner that can be comprehended and processed by a generative AI model. A prompt serves as the natural language description outlining the task to be executed by the AI. This process is integral to effectively instructing AI systems, enhancing their understanding and performance in various applications.
  • An introduction to prompt engineering.
  • Best practices in prompt engineering.
  • Prompt engineering with multimodal data, e.g. images, tables, audio, video, web pages, etc. How does prompt engineering differ when handling different data types?
  • Experiments on prompt engineering with multimodal data, e.g. images, tables, audio, video, web pages, etc.
  • What are the issues and open challenges in prompt engineering with multimodal data?
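One common practice is to keep per-modality prompt templates, since the instruction text changes with the data type. The templates, table, and question below are hypothetical and model-agnostic (no specific model or API is assumed):

```python
# Hypothetical per-modality prompt templates for a generative model
PROMPT_TEMPLATES = {
    "table": ("The following data is a Markdown table. "
              "Answer using only the table contents.\n\n"
              "{data}\n\nQuestion: {question}"),
    "image": ("You are given an image (attached). Describe the relevant "
              "region first, then answer.\n\nQuestion: {question}"),
}

def build_prompt(modality, question, data=""):
    # Unused keyword arguments are simply ignored by str.format
    return PROMPT_TEMPLATES[modality].format(data=data, question=question)

table = "| city | population |\n| Chemnitz | 250000 |"
print(build_prompt("table", "Which city is listed?", table))
```

Experiments on this topic would vary such templates per modality and compare the model outputs systematically.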

Literature:

Questions:

  • Introduction: An ontology refers to a structured framework for organizing and categorizing knowledge about a particular domain. It involves defining concepts, their properties, and the relationships between them in a systematic manner, often represented in a formal language such as OWL (Web Ontology Language) or RDF (Resource Description Framework). Ontologies are used to facilitate knowledge sharing, data integration, and reasoning within specific domains, ranging from medicine and biology to finance and engineering. Large Language Models (LLMs) have demonstrated remarkable capabilities in understanding and generating human-like text across diverse domains. Leveraging LLMs for ontology construction involves utilizing their natural language processing abilities to extract and organize knowledge from textual sources, thereby automating parts of the ontology engineering process.
  • Give an introduction to LLMs and ontologies.
  • How do LLMs help in creating ontologies?
  • What are the issues and open challenges in constructing ontologies the traditional way?
  • What are the issues and open challenges in constructing ontologies using LLMs?
  • Describe the current trends and applications in advancing ontology construction using LLMs.

Literature:

Questions:

  • What is the current state of Web Engineering research? To answer this question, systematically analyze all publications of the 2 venues listed under literature as detailed below. Your primary information sources should be the title, authors/affiliations, keywords, and abstract.
  • For each publication, capture title, authors/affiliations, keywords, abstract, venue, year, (for conference papers) name of track/workshop, (for journal articles) volume number, issue number, and name of issue, page numbers in proceedings/issue, length of the publication, current number of citations of the publication, and URL of online resource.
  • Based on your raw data collection, analyze the following aspects: 1. What are the main topics of research interest and in which areas of the Web Engineering field, along with the number of publications belonging to them? 2. Which authors are publishing in these venues, from which affiliations, from which countries, along with the number of publications for each of these? 3. Which are the most cited articles (relative to their age), which topics/areas receive the most citations, and which authors/affiliations/countries receive the most citations? 4. Considering the time dimension, are there any visible trends for aspects 1-3 over the 5 years considered?
  • Visualize your data and insights and provide the raw data in re-usable form (CSV).
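The raw-data analysis can start from a simple CSV aggregation, for example counting publications per author. The rows below are an invented sample; the real data would come from the venues listed under Literature:

```python
import csv
import io
from collections import Counter

# Invented sample: one CSV row per publication, authors separated by ";"
raw = io.StringIO(
    "title,authors,year\n"
    "Paper A,Alice;Bob,2021\n"
    "Paper B,Alice,2022\n"
    "Paper C,Carol;Bob,2023\n"
)

author_counts = Counter()
for row in csv.DictReader(raw):
    author_counts.update(a.strip() for a in row["authors"].split(";"))

print(author_counts.most_common(2))  # [('Alice', 2), ('Bob', 2)]
```

The same Counter pattern extends to affiliations, countries, topics, and years, which covers most of the aggregation work behind aspects 1-4.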

Literature:

  • Venue 1: ICWE Proceedings of the last 5 complete years (2019-2023)
  • Venue 2: JWE Journal Issues of the last 5 complete years (2019-2023)
  • For citation counts use: Google Scholar
  • Tool for analysis and inspiration for your data visualization: https://www.connectedpapers.com/

Questions:

  • Provide an overview of the current state of using Generative AI with Large Language Models such as ChatGPT, Bard, etc. as a tool to structure and write scientific texts (workshop/conference papers, journal articles, bachelor/master/PhD theses).
  • What are existing guidelines/regulations of publishers/universities? How does the usage of Generative AI need to be highlighted in the resulting texts?
  • Outline the current discussion on their usage as a tool vs. authorship, intellectual property, and quality concerns.
  • How will the availability of Large Language Models impact academia in research and in education in the coming years?
  • Experiment with a suitable model (e.g. ChatGPT, Bard), using it as a tool for writing different parts of a hypothetical master thesis (situation, motivation, problem analysis, research objectives/questions and scope, requirements, state of the art, solution draft, evaluation/experimentation plan). Ask your supervisor for the specific thesis task. Try different levels of prompts (thesis title only, title and a short description of your own, title and a detailed task description). Observe which prompts you need, how you improve them iteratively, which ideas you have to provide, and the quality/completeness/suitability of the output.

Seminar Opening

The date and time of the seminar opening meeting will be announced via OPAL.

Short Presentation

The date and time of the short presentations will be announced via OPAL.

In your short presentation, you will provide a brief overview of your selected topic.

This includes the following aspects:

1. What is your topic about?
2. Which literature sources have you researched so far?
3. What is your idea for a demonstration?

Following your short presentations, the advisors will provide you with feedback and hints for your full presentations.

Hints for your Presentation

  • As a rule of thumb, you should plan 2 minutes per slide. A significantly higher number of slides per minute exceeds the perceptive capacity of your audience.
  • Prior to your presentation, you should consider the following points: What is the main message of my presentation? What should the listeners take away? Your presentation should be created based on these considerations.
  • The following site provides many good hints: http://www.garrreynolds.com/preso-tips/

Seminar Days

The dates and times of the seminar days will be announced via OPAL.

Report

  • Important hints on citing:
    • Any statement which does not originate from the author has to be provided with a reference to the original source.
    • "When to Cite Sources" - a very good overview by Princeton University
    • Examples of correct citation can be found in the IEEE citation reference
    • Web resources are cited with author, title, and date, including URL and request date. For example:
      • [...] M. Nottingham and R. Sayre. (2005). The Atom Syndication Format - Request for Comments: 4287 [Online]. Available: http://www.ietf.org/rfc/rfc4287.txt (18.02.2008).
      • [...] Microsoft. (2015). Microsoft Azure Homepage [Online]. Available: http://azure.microsoft.com/ (23.09.2015).
      • A URL should be a hyperlink (clickable), if technically possible.
  • Further important hints for the submission of your written report:
    • Apart from justifiable exceptions (for instance, highlighting text using <strong>...</strong>), use only HTML elements which occur in the template. The provided CSS file may not be changed.
    • Before submitting your work, carefully check spelling and grammar, preferably with software support, for example with the spell checker of Microsoft Word.
    • Make sure that your HTML5 source code has no errors. To check your HTML5 source code, use the online validator of W3.org.
    • For submission, compress all necessary files (HTML, CSS, images) into a ZIP or TAR.GZ archive.

Review

  • Each seminar participant has to review exactly three reports. The reviews are not anonymous.
  • Use the review forms provided in the VSR Seminar Workflow, one per report.
  • Following the review phase, each seminar participant will receive the three peer reviews of his or her report and, if necessary, additional comments by the advisors. You will then have one more week to improve your report according to the received feedback.
  • The seminar grade will consider the final report. All comments in the reviews are for improving the text and therefore in the interest of the author.

Press Articles