Distributed and Self-organizing Systems
Pro-/Haupt- und Forschungsseminar VSR (SS 2024)



Welcome to the homepage of the Pro-/Haupt- und Forschungsseminar Web Engineering

This website contains all important information about the seminar, including links to available topics as well as information about the seminar process in general.

The interdisciplinary research area Web Engineering develops approaches for the methodological construction of Web-based applications and distributed systems as well as their continuous development (evolution). For instance, Web Engineering deals with the development of interoperable Web Services, the implementation of web portals using service-oriented architectures (SOA), fully accessible user interfaces or even exotic web-based applications that are voice controlled via the telephone or that are represented on TV and Radio.

The following steps are necessary to complete the seminar:

  • Preparation of a presentation about the topic assigned to you.
  • An additional written report of your topic.
  • Each report is reviewed by other participants.

Seminar chairs

Samuel

Gaedke


Contact

If you have any questions concerning this seminar or the exam as a participant, please contact us via OPAL.

We also offer a Feedback system, where you can provide anonymous feedback for a particular session to the presenter on what you liked or where we can improve.

Participants

The seminar is offered for students of the following programmes (for pre-requisites, please refer to your study regulations):

Students who are interested in the Web Engineering Seminar (applies only to Master Web Engineering) will find all information here.

If your programme is not listed here, please contact us prior to seminar registration and indicate your study programme, the version (year) of your study regulations (Prüfungsordnungsversion) and the module number (Modulnummer) to allow us to check whether we can offer the seminar for you and find an appropriate mapping.

Registration

You may only participate after registration in the Seminar Course in OPAL.

The registration opens on 21.03.2024 and ends on 07.04.2024 at 23:59. As the available slots are usually booked rather quickly, we recommend completing your registration soon after it opens.

Topics and Advisors

Questions:

  • How can you include Forgiveness and Regret in a Content Trust Model?
  • Why would these concepts enhance the Content Trust model?

Questions:

  • What are GOMS/KLM Models? How do they work? Why are they used? What is the (data) basis on which they were created?
  • For what kinds of user interfaces can they be or have they been applied? What are their limitations?
  • Apply GOMS modeling to real-world examples (e.g. the VSR website) and demonstrate how it can be used to improve these interfaces.
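To illustrate how a KLM prediction works, here is a minimal sketch using the nominal operator times from Card, Moran & Newell; the "log in" task sequence is an invented example.

```python
# Minimal Keystroke-Level Model (KLM) calculator.
# Operator times are the nominal values from Card, Moran & Newell (1983);
# the task sequence below is a hypothetical "log in" interaction.

KLM_TIMES = {
    "K": 0.20,  # keystroke (average skilled typist)
    "P": 1.10,  # point with mouse to a target
    "B": 0.10,  # press or release mouse button
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental act of preparation
}

def klm_estimate(operators: str) -> float:
    """Sum the predicted execution time for a string of KLM operators."""
    return round(sum(KLM_TIMES[op] for op in operators), 2)

# Example: point to the username field, click, type 8 characters,
# then point to the login button and click.
sequence = "MPB" + "K" * 8 + "MPB"
print(klm_estimate(sequence))  # predicted execution time in seconds
```

Comparing such estimates before and after a redesign is the usual way KLM is used to argue that one interface variant is faster than another.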

Literature:

  • https://cogulator.io/
  • https://syntagm.co.uk/design/klmcalc.shtml
  • https://www.cogtool.org/
  • Card, S. K., Moran, T. P., & Newell, A. (1983). The psychology of human-computer interaction. Hillsdale, N.J. : L. Erlbaum Associates.
  • Kieras, D. (1997). A guide to GOMS model usability evaluation using NGOMSL (Chapter 31). In M. Helander, T.K. Landauer & P.V. Prabhu (Eds.), Handbook of Human-Computer Interaction. Amsterdam: North-Holland Elsevier Science Publishers.
  • John, B. and Kieras, D. The GOMS family of user interface analysis techniques: comparison and contrast. ACM TOCHI, 3 (4). 1996. 320-351.
  • John, B. E. (2010). CogTool: Predictive human performance modeling by demonstration. 19th Annual Conference on Behavior Representation in Modeling and Simulation 2010, BRiMS 2010, 308–309.

Questions:

  • What are replication studies? Why are replication studies important? To what situation does the term "replication crisis" refer, and in which fields within computer science research has it been applied?
  • Find existing replication studies in Web Engineering and Software Engineering. What is replicated in them and how? Are there differences to replication studies in other fields (e.g. psychology, biology)?
  • Did the replication studies confirm the initial results? What were the problems?

Literature:

  • Cockburn, A., Dragicevic, P., Besançon, L., & Gutwin, C. (2020). Threats of a replication crisis in empirical computer science. Communications of the ACM, 63(8), 70–79. https://doi.org/10.1145/3360311
  • Echtler, F., & Häußler, M. (2018). Open Source, Open Science, and the Replication Crisis in HCI. Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, 2018-April, 1–8. https://doi.org/10.1145/3170427.3188395
  • Shepperd, M. (2018). Replication studies considered harmful. Proceedings of the 40th International Conference on Software Engineering: New Ideas and Emerging Results, 73–76. https://doi.org/10.1145/3183399.3183423
  • Gómez, O. S., Juristo, N., & Vegas, S. (2014). Understanding replication of experiments in software engineering: A classification. Information and Software Technology, 56(8), 1033-1048.
  • Da Silva, F. Q., Suassuna, M., França, A. C. C., Grubb, A. M., Gouveia, T. B., Monteiro, C. V., & dos Santos, I. E. (2014). Replication of empirical studies in software engineering research: a systematic mapping study. Empirical Software Engineering, 19(3), 501-557.
  • Shepperd, M., Ajienka, N., & Counsell, S. (2018). The role and value of replication in empirical software engineering results. Information and Software Technology, 99, 120-132.

Questions:

  • What is signposting?
  • How can signposting be applied to improve machine-readability of resources on the Web?
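Signposting conveys typed links (e.g. cite-as, describedby, license) in HTTP Link headers so machines can navigate scholarly resources. A minimal sketch of extracting these relations; the header value below is an invented example.

```python
# Hypothetical sketch: extracting Signposting relations from an HTTP
# "Link" header. The header value is invented; the relation types
# (cite-as, describedby, license) follow the Signposting conventions.
import re

def parse_link_header(value: str) -> dict[str, list[str]]:
    """Return a mapping of link relation -> list of target URLs."""
    links: dict[str, list[str]] = {}
    for target, rel in re.findall(r'<([^>]+)>\s*;\s*rel="([^"]+)"', value):
        links.setdefault(rel, []).append(target)
    return links

header = ('<https://doi.org/10.5072/example>; rel="cite-as", '
          '<https://example.org/meta.jsonld>; rel="describedby", '
          '<https://creativecommons.org/licenses/by/4.0/>; rel="license"')
print(parse_link_header(header)["cite-as"])
```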

Questions:

  • What is the difference between citation, bibliography and bibliometrics? What is their importance for scientific works?
  • Explain and differentiate at least 5 common citation styles for computer science (IEEE, APA, ACM, ...). Show this with examples.
  • What are rules and expectations for a bibliography?
  • Which tools/programs can be used for citations and bibliography while writing a paper?
  • What metrics can be used in bibliometrics? How do they differ? Where are their limits? Show this with examples.
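To make the difference between citation styles concrete, here is a sketch rendering one fictional journal article in two common styles. The formatting rules are simplified approximations of IEEE and APA; the real style guides cover many more cases.

```python
# Illustrative sketch: the reference data below is invented, and the
# two formatters only approximate IEEE and APA journal-article rules.

ref = {
    "authors": [("J.", "Doe"), ("A.", "Smith")],
    "title": "Engineering the Web",
    "journal": "Journal of Web Engineering",
    "volume": 12, "number": 3, "pages": "1-20", "year": 2023,
}

def ieee(r: dict) -> str:
    # IEEE: initials first, title in quotes, abbreviated field labels.
    authors = " and ".join(f"{i} {last}" for i, last in r["authors"])
    return (f'{authors}, "{r["title"]}," {r["journal"]}, '
            f'vol. {r["volume"]}, no. {r["number"]}, pp. {r["pages"]}, {r["year"]}.')

def apa(r: dict) -> str:
    # APA: surname first, year in parentheses after the authors.
    authors = ", & ".join(f"{last}, {i}" for i, last in r["authors"])
    return (f'{authors} ({r["year"]}). {r["title"]}. {r["journal"]}, '
            f'{r["volume"]}({r["number"]}), {r["pages"]}.')

print(ieee(ref))
print(apa(ref))
```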

Literature:

Questions:

  • What are known security vulnerabilities or exploits possible via WASM?
  • In what way does WASM's sandbox protect a user(-agent), but also any server, from malicious executions?
  • Test for typical attacks (e.g. (D)DoS, sniffing, session hijacking, foreign connection establishment, code injection).

Literature:

  • own research

Questions:

  • What are typical characteristics of HTTP versions?
  • TCP, UDP, QUIC: which of these is used underneath which HTTP version, and how does it influence the protocol and its usage?
  • A technical deep dive on demonstrators for HTTP version comparison with focus on HTTP/3 is expected.
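One visible difference between the versions is request framing: HTTP/1.1 sends a plain-text request line and headers, while HTTP/2 (and HTTP/3) carry a binary-encoded header list with pseudo-headers. A small sketch of that logical difference; the HTTP/2 part shows only the header list, not the HPACK encoding or framing.

```python
# Sketch of how the "same" request is framed in HTTP/1.1 (plain text)
# versus HTTP/2 (logical header list with pseudo-headers). HPACK
# compression and the binary frame layer are deliberately omitted.

def http11_request(host: str, path: str) -> bytes:
    # HTTP/1.1: one text request line plus text headers.
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: keep-alive\r\n\r\n").encode()

def http2_header_list(host: str, path: str) -> list[tuple[str, str]]:
    # HTTP/2 replaces the request line and Host header with pseudo-headers.
    return [(":method", "GET"), (":scheme", "https"),
            (":authority", host), (":path", path)]

print(http11_request("example.org", "/"))
print(http2_header_list("example.org", "/"))
```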

Literature:

  • own research

Questions:

  • What are RDF Surfaces? Give a technical deep dive.
  • How can RDF Surfaces help to improve decentralized knowledge graphs?

Questions:

  • Create a corpus of Customer Service Chatbots that you analyze on certain criteria. The corpus should include at least 20 different chatbots (40 for Hauptseminar). For this you will get access to a gitlab repository in which you can save your results. Build a strategy on how to systematically search for these customer service chatbots and describe this methodology.
  • The corpus should include at least information about the corporation, its field, the source, how the chatbot introduces itself, (if available) information on the training data, possible access restrictions, information on the user interface, etc.
  • Additionally, have a look at the code and categorize it. The following questions can be used as initial criteria and extended; use the available literature to formulate specific criteria: Is the code openly accessible and readable? Are third-party chatbots used? If yes, which ones? Are the chatbots included via API call etc. from the outside? How are they invoked in the code?

Literature:

  • own research
  • Adamopoulou, E., Moussiades, L. (2020). An Overview of Chatbot Technology. In: Maglogiannis, I., Iliadis, L., Pimenidis, E. (eds) Artificial Intelligence Applications and Innovations. AIAI 2020. IFIP Advances in Information and Communication Technology, vol 584. Springer, Cham. https://doi.org/10.1007/978-3-030-49186-4_31
  • Adamopoulou, Eleni, and Lefteris Moussiades. "Chatbots: History, technology, and applications." Machine Learning with Applications 2 (2020): 100006. https://doi.org/10.1016/j.mlwa.2020.100006
  • Akma, N., Hafiz, M., Zainal, A., Fairuz, M., Adnan, Z.: Review of chatbots design techniques. Int. J. Comput. Appl. 181, 7–10 (2018). https://doi.org/10.5120/ijca2018917606
  • M. Baez, F. Daniel, F. Casati and B. Benatallah, "Chatbot Integration in Few Patterns," in IEEE Internet Computing, vol. 25, no. 3, pp. 52-59, 1 May-June 2021, doi: 10.1109/MIC.2020.3024605.

Questions:

  • Intro: Prompt engineering is an emerging discipline for developing and optimizing prompts to efficiently use large language models (LLMs) to enable or assist applications and research topics in a wide variety of domains, including web engineering.
  • Shortly introduce the basics of prompting.
  • Explain and demonstrate common prompting techniques.
  • Explain and (if already possible) demonstrate more complex prompting techniques currently used or discussed.
  • Review currently available libraries and tools useful for prompt engineers.
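As a starting point, a common basic technique is few-shot prompting: prepending labelled examples before the actual query. A minimal sketch; the instruction, examples and query are invented, and the call to an actual LLM is out of scope here.

```python
# Minimal sketch of a few-shot prompt template, one of the common
# prompting techniques. All example texts are invented placeholders.

def few_shot_prompt(instruction: str,
                    examples: list[tuple[str, str]],
                    query: str) -> str:
    """Assemble an instruction, labelled examples and a query into one prompt."""
    parts = [instruction, ""]
    for text, label in examples:
        parts.append(f"Input: {text}\nOutput: {label}\n")
    parts.append(f"Input: {query}\nOutput:")
    return "\n".join(parts)

prompt = few_shot_prompt(
    "Classify the sentiment of each input as positive or negative.",
    [("I love this seminar!", "positive"),
     ("The deadline stress is awful.", "negative")],
    "The presentation went really well.",
)
print(prompt)
```

The prompt deliberately ends with an unfinished `Output:` line so that the model's completion is the answer.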

Literature:

    Questions:

    • Intro: As a data engineer and web developer, it is essential to understand the concepts of data privacy and licensing and their significance in today’s data- and software-driven world.
    • Introduction and basic definitions: data, processing,
    • Personal Data: definition, types of personal data, processing principles, anonymization and pseudo-anonymization, mixed data, documentation, data subjects’ rights
    • Introduction and basic definitions: licenses
    • Open Source Licenses: definition, types of Open Source Licenses, overview of the most popular examples of Open Source Licenses with their respective conditions, compatibility of licenses (for instance in the context of data fusion), review of available websites and tools for Open Source License management, including demonstrations
    • Optional: Discuss some legal challenges that arise for training data usage in connection with AI models
    • Optional: Name and briefly explain current or planned legislation for AI applications

    Literature:

      Questions:

      • What does the term Open Science mean? How many scientific works are published in this way?
      • Look at the following terms: Open Access, Open Data and Open Source. Give an overview of each topic: what differs from the traditional publishing process, and what is important at each step?
      • What is the difference to the FAIR principles? What are the differences to publishing works on ResearchGate, arXiv, etc.?
      • How does the review process work in Open Access Publishing? What are funding possibilities?
      • For the demonstration, show which platforms TU Chemnitz provides for each aspect of publishing a paper and where you may need to use another platform. Also give a brief overview of similar platforms and tools.

      Questions:

      • What are Dark Patterns? Where can they be found? How are they defined? Use recent taxonomies to give a short overview.
      • What is the current state of the art of Dark Patterns in Online Shopping? Conduct a literature research and present your results: How many publications are available? When were they published? Which methodology was used to define the found Dark Patterns? Which ones were found and how were they categorized?
      • Choose 10 well-known online retailers (Amazon, Temu, ...) and analyze them on their use of Dark Patterns. Create a codebook (can be a table) where you list the retailers, the situations in which you find possible Dark Patterns and how these Dark Patterns could be categorized. For this, first prepare the list of retailers and the situations in which possible Dark Patterns may occur. After that, code this list according to a Dark Patterns taxonomy. Both students should code the samples independently and then meet together with the advisor to discuss the results and unclear cases. Present the consolidated results in a demonstration.
      • For the coding, consider the following questions: Is it an established Dark Pattern? Which one? Is it a potential new Dark Pattern? Is information missing for a decision?
      • Reflect on the process: What did you learn? Where did you have problems? How could you solve them?
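Because both students code the sample independently, the agreement between the two codings can be quantified before the consolidation meeting. Cohen's kappa is a standard choice; the example codings below are invented.

```python
# Minimal Cohen's kappa for two coders over the same list of items.
# The label lists are invented example codings of five retailer situations.
from collections import Counter

def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    """Chance-corrected agreement between two coders."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Expected agreement if both coders assigned labels independently
    # according to their own label frequencies.
    expected = sum(freq_a[lab] * freq_b[lab]
                   for lab in set(coder_a) | set(coder_b)) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["urgency", "urgency", "none", "misdirection", "none"]
b = ["urgency", "none",    "none", "misdirection", "none"]
print(round(cohens_kappa(a, b), 3))
```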

      Literature:

      • Own research
      • Colin M. Gray, Nataliia Bielova, Cristiana Santos, and Thomas Mildner. 2023. An Ontology of Dark Patterns: Foundations, Definitions, and a Structure for Transdisciplinary Action. https://arxiv.org/abs/2309.09640
      • Brignull, H.: Deceptive patterns. https://www.deceptive.design
      • Gray, C.M., Sanchez Chamorro, L., Obi, I., Duane, J.N.: Mapping the landscape of dark patterns scholarship: A systematic literature review. In: Companion Publication of the 2023 ACM Designing Interactive Systems Conference. pp. 188–193 (2023)
      • Mathur, A., Kshirsagar, M., Mayer, J.: What makes a dark pattern... dark? Design attributes, normative considerations, and measurement methods. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. pp. 1–18 (2021)
      • Arunesh Mathur, Gunes Acar, Michael J. Friedman, Eli Lucherini, Jonathan Mayer, Marshini Chetty, and Arvind Narayanan. 2019. Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites. Proc. ACM Hum.-Comput. Interact. 3, CSCW, Article 81 (November 2019), 32 pages. https://doi.org/10.1145/3359183
      • Sin, R., Harris, T., Nilsson, S., & Beck, T. (2022). Dark patterns in online shopping: do they work and can nudges help mitigate impulse buying? Behavioural Public Policy, 1–27. doi:10.1017/bpp.2022.11
      • Voigt, C., Schlögl, S., Groth, A. (2021). Dark Patterns in Online Shopping: of Sneaky Tricks, Perceived Annoyance and Respective Brand Trust. In: Nah, F.FH., Siau, K. (eds) HCI in Business, Government and Organizations. HCII 2021. Lecture Notes in Computer Science(), vol 12783. Springer, Cham. https://doi.org/10.1007/978-3-030-77750-0_10

      Questions:

      • How does Bun compare to its competitors? What does Bun do differently in terms of functionality?
      • How does the architecture of Bun differ from Node.js and Deno?
      • Compare the three runtimes - Deno, Bun and Node.js - by running different types of Benchmarks. Measure performance and memory consumption!

      Literature:

      • Own research

      Questions:

      • What are Interaction Design Patterns? What are Patterns? How are they connected? Explain their structure on an example (including several patterns).
      • What are their benefits for Web Engineers and Users?
      • Collect a dataset of 20 websites with chatbots and evaluate your interaction with the chatbot in a systematic way. Present your idea for a methodology in the Short Presentation. Map your interactions to the list of Interaction Design Patterns by Tidwell et al. Present and analyse your results: Which patterns did you find? How often did you find each pattern? Did you find some in an adapted form? How was it adapted? Did you find interactions which are not yet listed by Tidwell et al? Describe them in the same schema.

      Literature:

      • Own research
      • J. Tidwell, C. Brewer, and A. Valencia, Designing interfaces: patterns for effective interaction design, Third edition. Beijing [China] ; North Sebastopol, CA: O’Reilly, 2020. (available via uni library)
      • Preprint (will be provided) from V. Traubinger and M. Gaedke: Interaction Design Patterns of Web Chatbots
      • C. Alexander, S. Ishikawa, M. Silverstein, M. Jacobson, I. F. King, and S. Angel, A Pattern Language: Towns, Buildings, Construction. in Center for Environmental Structure series, no. v. 2. New York: Oxford University Press, 1977.
      • A. Shevat, Designing bots: creating conversational experiences, First edition. Beijing ; Boston: O’Reilly, 2017.
      • F. A. M. Valério, T. G. Guimarães, R. O. Prates, and H. Candello, “Chatbots Explain Themselves: Designers’ Strategies for Conveying Chatbot Features to Users,” JIS, vol. 9, no. 3, p. 1, Dec. 2018, doi: 10.5753/jis.2018.710.

      Questions:

      • What are web crawlers? How do they work? What are they used for? How do different crawler techniques work? Show us some examples.
      • The use case here is to find websites with chatbots: Create a concept for the web crawler architecture. How can you find different types of chatbots? What should you look out for on a website? Present your concept.
      • Implement the Web Crawler and evaluate the found pages with a meaningful analysis.
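The detection step of such a crawler could scan fetched HTML for markers of well-known chatbot widgets. A hypothetical sketch; the marker list is an illustrative assumption, not a complete catalogue, and the fetching part of the crawler is omitted.

```python
# Hypothetical chatbot-detection step for a web crawler: scan page
# source for script URLs or class names of known chat widgets.
# The marker patterns below are illustrative assumptions.
import re

CHATBOT_MARKERS = {
    "intercom": r"widget\.intercom\.io",
    "drift": r"js\.driftt\.com",
    "tidio": r"code\.tidio\.co",
    "generic": r"chat(bot)?[-_]?widget",
}

def detect_chatbots(html: str) -> set[str]:
    """Return the names of all markers found in the page source."""
    return {name for name, pattern in CHATBOT_MARKERS.items()
            if re.search(pattern, html, re.IGNORECASE)}

page = ('<html><script src="https://widget.intercom.io/widget/abc"></script>'
        '<div class="chat-widget"></div></html>')
print(detect_chatbots(page))
```

A real crawler would combine such static markers with dynamic checks (e.g. rendering the page), since many chatbots are injected at runtime.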

      Literature:

      • Own research
      • Waheed, N., Ikram, M., Hashmi, S.S., He, X., Nanda, P. (2022). An Empirical Assessment of Security and Privacy Risks of Web-Based Chatbots. In: Chbeir, R., Huang, H., Silvestri, F., Manolopoulos, Y., Zhang, Y. (eds) Web Information Systems Engineering – WISE 2022. WISE 2022. Lecture Notes in Computer Science, vol 13724. Springer, Cham. https://doi.org/10.1007/978-3-031-20891-1_23
      • Khder, M. A. (2021). Web scraping or web crawling: State of art, techniques, approaches and application. International Journal of Advances in Soft Computing & Its Applications, 13(3).

      Questions:

      • What are Dark Patterns? What are their characteristics? Show us some examples!
      • How can Dark Patterns be automatically detected? Conduct a systematic literature review to find relevant publications and analyse them: Which patterns can be automatically detected? Which methods are used for the automatic detection? What is the role of AI and how did it change over time? Present these and other relevant results.
      • How can these general algorithms be used on chatbots? Create some requirements and analyse the collected algorithms. Where would they have to be changed? How? Where do you see challenges? Create an overview (a guide) for your results.
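As a naive illustration of what rule-based detection looks like, the sketch below flags urgency and scarcity wording in page text. Detectors from the literature are far more sophisticated; the patterns here are illustrative assumptions.

```python
# Naive rule-based detector for two textual dark-pattern cues.
# The regex rules are invented illustrations, not validated detectors.
import re

RULES = {
    "urgency": r"\b(only \d+ left|offer ends|hurry|last chance)\b",
    "scarcity": r"\b\d+ (people|others) (are )?(viewing|looking at)\b",
}

def flag_dark_patterns(text: str) -> list[str]:
    """Return the names of all rules that match the given text."""
    return [name for name, pattern in RULES.items()
            if re.search(pattern, text, re.IGNORECASE)]

print(flag_dark_patterns(
    "Hurry, only 2 left in stock! 14 people are viewing this item."))
```

Adapting such rules to chatbots would mean running them over conversation transcripts rather than static page text, which is part of what this topic should explore.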

      Literature:

      • Own research
      • Brignull, H.: Deceptive patterns. https://www.deceptive.design
      • Colin M. Gray, Nataliia Bielova, Cristiana Santos, and Thomas Mildner. 2023. An Ontology of Dark Patterns: Foundations, Definitions, and a Structure for Transdisciplinary Action. https://arxiv.org/abs/2309.09640
      • Adamopoulou, E., Moussiades, L. (2020). An Overview of Chatbot Technology. In: Maglogiannis, I., Iliadis, L., Pimenidis, E. (eds) Artificial Intelligence Applications and Innovations. AIAI 2020. IFIP Advances in Information and Communication Technology, vol 584. Springer, Cham. https://doi.org/10.1007/978-3-030-49186-4_31
      • Adamopoulou, Eleni, and Lefteris Moussiades. "Chatbots: History, technology, and applications." Machine Learning with Applications 2 (2020): 100006. https://doi.org/10.1016/j.mlwa.2020.100006
      • A. Shevat, Designing bots: creating conversational experiences, First edition. Beijing ; Boston: O’Reilly, 2017.
      • Traubinger, V., Heil, S., Grigera, J., Garrido, A., Gaedke, M. (2024). In Search of Dark Patterns in Chatbots. In: Følstad, A., et al. Chatbot Research and Design. CONVERSATIONS 2023. Lecture Notes in Computer Science, vol 14524. Springer, Cham. https://doi.org/10.1007/978-3-031-54975-5_7
      • Definition for Requirements: IEEE: IEEE Standard Glossary of Software Engineering Terminology , IEEE Standard 610.12 1990, IEEE, New York, 1983.

      Questions:

      • Knowledge graphs (KGs), a term coined by Google in 2012 to refer to its general-purpose knowledge base, are critical to both: they reduce the need for large labelled Machine Learning datasets; facilitate transfer learning; and generate explanations. KGs are used in many industrial AI applications, including digital twins, enterprise data management, supply chain management, procurement, and regulatory compliance.
      • Give a short introduction into the general approach and the state-of-the-art for Knowledge Graphs and for trust and trustworthiness in KGs.
      • How can we make Knowledge Graphs Trustworthy?
      • What are trustworthiness measurements for KGs, and what are untrustworthy patterns in Knowledge Graphs?
      • Give 3 different examples of trustworthy KGs.
      • Demonstrate your findings by giving examples of trustworthy KGs in comparison with examples of untrustworthy patterns in KGs across fields of your choice.

      Literature:

      • Jia, S., Xiang, Y., Chen, X., & Wang, K. (2019, May). Triple trustworthiness measurement for knowledge graph. In The World Wide Web Conference (pp. 2865-2871).
      • Yan, Y., Yu, P., Fang, H., & Wu, Z. (2022, June). Trustworthiness Measurement for Multimedia Domain Knowledge Graph. In 2022 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB) (pp. 1-5). IEEE.
      • Huber, R. and Klump, J.: The Dark Side of the Knowledge Graph - How Can We Make Knowledge Graphs Trustworthy?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13071, https://doi.org/10.5194/egusphere-egu2020-13071, 2020.
      • Ge, Y., Ma, J., Zhang, L., Li, X., & Lu, H. (2023). Trustworthiness-aware knowledge graph representation for recommendation. Knowledge-Based Systems, 278, 110865.

      Questions:

      • Give a short introduction to explainable AI (XAI) and Knowledge Graphs (KGs), and a short introduction to the role of KGs in XAI.
      • State and compare the respective opportunities and challenges of KGs in XAI.
      • Where is potential for an integration of both approaches, and what would be the benefit of such synergies for various applications? (For example, what are the benefits of using KGs in explainable recommendation systems?)
      • Explain some ways KGs could be or already have been used to improve XAI.
      • Give a more detailed review, supplemented by a short demonstration of one successful realization of one combined approach.
      • What are future trends in the XAI field?

      Literature:

      • Lecue, F. (2020). On the role of knowledge graphs in explainable AI. Semantic Web, 11(1), 41-51.
      • Rajabi, E., & Etminani, K. (2022). Knowledge-graph-based explainable AI: A systematic review. Journal of Information Science, 01655515221112844.
      • Tiddi, I., & Schlobach, S. (2022). Knowledge graphs as tools for explainable machine learning: A survey. Artificial Intelligence, 302, 103627.

      Questions:

      • Explainable recommendation refers to personalized recommendation algorithms that address the problem of why – they not only provide users or system designers with recommendation results, but also explanations to clarify why such items are recommended.
      • Give a short introduction into the general approach and the state-of-the-art for Explainable Recommendation Systems.
      • What are the fairness measurements for Explainable Recommendation Systems?
      • How can we obtain fairness-aware explainable recommendations using KGs?
      • How can we address provider fairness in Explainable Recommendation Systems?
      • Give a more detailed review, supplemented by a short demonstration of one successful realization of one combined approach.

      Literature:

      • Dinnissen, K., & Bauer, C. (2022). Fairness in music recommender systems: A stakeholder-centered mini review. Frontiers in big Data, 5, 913608.
      • Chen, J., Dong, H., Wang, X., Feng, F., Wang, M., & He, X. (2020). Bias and debias in recommender system: a survey and future directions. CoRR abs/2010.03240 (2020). arXiv preprint arXiv:2010.03240.
      • Abdollahpouri, H., Adomavicius, G., Burke, R., Guy, I., Jannach, D., Kamishima, T., ... & Pizzato, L. (2020). Multistakeholder recommendation: Survey and research directions. User Modeling and User-Adapted Interaction, 30, 127-158.
      • Balloccu, G., Boratto, L., Fenu, G., Marras, M. (2022). Hands on explainable recommender systems with knowledge graphs. In Proceedings of the 16th ACM Conference on Recommender Systems, pp. 710-713.

      Questions:

      • What is trustworthy AI? What characteristics must AI have to be trustworthy? How can the trustworthiness of AI be measured?
      • Why is trustworthy AI recommended over standard AI when used in the healthcare domain?
      • Demonstrate the differences between trustworthy and untrustworthy AI by showing an example of both in chatbots.

      Questions:

      • What is machine unlearning? In which domains can it be applied?
      • What algorithms are used in machine unlearning? Explain the differences between machine learning and machine unlearning.
      • Find a tutorial related to a machine unlearning implementation and demonstrate it.
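To make the idea tangible, here is a toy sketch of *exact* unlearning for a model whose state is a simple aggregate: removing a point updates the stored sums directly, so the result equals a model trained without that point. Deep models need the approximate algorithms discussed in the literature; this toy model is an illustrative assumption.

```python
# Toy exact unlearning: a "model" that stores only the sum and count of
# 1-D training values. Unlearning a point reverses its contribution
# exactly, with no retraining from scratch.

class MeanModel:
    def __init__(self):
        self.total, self.count = 0.0, 0

    def learn(self, x: float):
        self.total += x
        self.count += 1

    def unlearn(self, x: float):
        # After removal the state equals that of a model that never saw x.
        self.total -= x
        self.count -= 1

    def predict(self) -> float:
        return self.total / self.count

m = MeanModel()
for x in [1.0, 2.0, 9.0]:
    m.learn(x)
m.unlearn(9.0)       # forget the outlier
print(m.predict())   # same as training on [1.0, 2.0] only
```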

      Questions:

      • Introduction: Prompt engineering involves crafting text in a manner that can be comprehended and processed by a generative AI model. A prompt serves as the natural language description outlining the task to be executed by the AI. This process is integral to effectively instructing AI systems, enhancing their understanding and performance in various applications.
      • An introduction to prompt engineering
      • Best practices in prompt engineering
      • Prompt engineering with multimodal data, e.g. images, tables, audio, video, web pages, etc. How does prompt engineering differ for different data types?
      • Experiments on prompt engineering with multimodal data, e.g. images, tables, audio, video, web pages, etc.
      • What are the issues and open challenges in prompt engineering with multimodal data?

      Literature:

        Questions:

        • Introduction: An ontology refers to a structured framework for organizing and categorizing knowledge about a particular domain. It involves defining concepts, their properties, and the relationships between them in a systematic manner, often represented in a formal language such as OWL (Web Ontology Language) or RDF (Resource Description Framework). Ontologies are used to facilitate knowledge sharing, data integration, and reasoning within specific domains, ranging from medicine and biology to finance and engineering. Large Language Models (LLMs) have demonstrated remarkable capabilities in understanding and generating human-like text across diverse domains. Leveraging LLMs for ontology construction involves utilizing their natural language processing abilities to extract and organize knowledge from textual sources, thereby automating parts of the ontology engineering process.
        • Give an introduction to LLMs and ontologies
        • How do LLMs help in creating ontologies?
        • What are the issues and open challenges in constructing ontologies the traditional way?
        • What are the issues and open challenges in constructing ontologies using LLMs?
        • Present current trends and applications in advancing ontology construction using LLMs
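A minimal baseline for the "traditional way" is lexico-syntactic pattern extraction (Hearst-style "X is a Y" patterns) to propose subclass candidates; an LLM-based pipeline would replace this extraction step with model prompts. The input sentences below are invented.

```python
# Pattern-based extraction of (subclass, superclass) candidates, a
# traditional baseline for ontology construction. The example text is
# invented and the single pattern is a deliberate simplification.
import re

def extract_is_a(text: str) -> list[tuple[str, str]]:
    """Return (subclass, superclass) candidate pairs from "X is a Y" phrases."""
    pattern = r"\b([A-Z][a-z]+) is an? ([a-z]+)\b"
    return re.findall(pattern, text)

text = "A Chatbot is a program. Intercom is a chatbot used on many sites."
print(extract_is_a(text))
```

Comparing the noisy output of such patterns with the output of an LLM prompted for the same task is one concrete way to discuss the respective open challenges.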

        Literature:

          Questions:

          • What is the current state of voice user interface research? Identify relevant publication venues, classify/group existing published approaches and identify research directions for future research as well as tools and platforms that support the creation of voice user interfaces.
          • Which approaches specifically address the automatic assessment, evaluation and testing of voice user interfaces or voice interactions? Which methods do they use? What kinds of inputs do they require? Which results/predictions/assessments do they produce?
          • What are quality and performance metrics for voice user interfaces? Identify existing measurement strategies and metrics that allow the evaluation and comparison of voice user interfaces.

          Questions:

          • Identify approaches published in scientific literature and existing tools/frameworks that make use of WebAssembly for code mobility, i.e. to execute code written in one single language on server and client side, or outside the browser, and briefly describe them.
          • Try to identify groups of approaches with similar architecture/purpose/technology.
          • Prepare a demo applying at least one of the approaches to a scenario application to showcase the potential benefits.

          Literature:

          • Mäkitalo, N., Mikkonen, T., Pautasso, C., Bankowski, V., Daubaris, P., Mikkola, R., Beletski, O.: WebAssembly Modules as Lightweight Containers for Liquid IoT Applications. In: Proc. of ICWE2021. pp. 328–336. Springer, Cham (2021).
          • Wen, Elliott, and Gerald Weber. "Wasmachine: Bring iot up to speed with a webassembly os." 2020 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops). IEEE, 2020.
          • Ménétrey, Jämes, et al. "WebAssembly as a Common Layer for the Cloud-edge Continuum." Proceedings of the 2nd Workshop on Flexible Resource and Application Management on the Edge. 2022.
          • Koren, István. "A standalone webassembly development environment for the internet of things." Web Engineering: 21st International Conference, ICWE 2021, Biarritz, France, May 18–21, 2021, Proceedings. Cham: Springer International Publishing, 2021.
          • Hoque, Mohammed Nurul, and Khaled A. Harras. "WebAssembly for Edge Computing: Potential and Challenges." IEEE Communications Standards Magazine 6.4 (2022): 68-73.
          • WASI https://wasi.dev/
          • .NET Blazor
          • https://www.thinktecture.com/blazor/unterschiede-blazor-webassembly-blazor-server/

          Questions:

          • What differentiates personal knowledge graphs from conventional knowledge graphs?
          • What are common use cases for personal knowledge graphs?
          • How can access control for personal knowledge graphs be realized?
          • How can personal knowledge graphs be integrated into a Semantic Web application?
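One simple way to think about access control here is at the triple level: an ACL maps each graph predicate to the agents allowed to read it. A hypothetical sketch; the triples, predicate names and agents are invented for illustration, and real systems would use standards such as Solid's Web Access Control instead.

```python
# Hypothetical triple-level access control for a personal knowledge
# graph. All data below is invented for illustration.

TRIPLES = [
    ("me", "name", "Alice"),
    ("me", "birthDate", "1990-01-01"),
    ("me", "knows", "Bob"),
]

ACL = {  # predicate -> set of agents allowed to read triples using it
    "name": {"public"},
    "birthDate": {"doctor"},
    "knows": {"friends", "doctor"},
}

def readable_triples(agent: str) -> list[tuple[str, str, str]]:
    """Return only the triples the given agent may read."""
    return [t for t in TRIPLES
            if agent in ACL.get(t[1], set())
            or "public" in ACL.get(t[1], set())]

print(readable_triples("doctor"))
```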

          Literature:

          • M. G. Skjæveland, K. Balog, N. Bernard, W. Łajewska, and T. Linjordet, “An ecosystem for personal knowledge graphs: A survey and research roadmap,” AI Open, vol. 5, pp. 55–69, Jan. 2024, doi: 10.1016/j.aiopen.2024.01.003.
          • P. Chakraborty and D. K. Sanyal, “A comprehensive survey of personal knowledge graphs,” WIREs Data Mining and Knowledge Discovery, vol. 13, no. 6, p. e1513, 2023, doi: 10.1002/widm.1513.
          • K. Balog and T. Kenter, “Personal Knowledge Graphs: A Research Agenda,” in Proceedings of the 2019 ACM SIGIR International Conference on Theory of Information Retrieval, in ICTIR ’19. New York, NY, USA: Association for Computing Machinery, Sep. 2019, pp. 217–220. doi: 10.1145/3341981.3344241.
          • own research

          Seminar Opening

          The date and time of the seminar opening meeting will be announced via OPAL.

          Short Presentation

          The date and time of the short presentations will be announced via OPAL.

          In your short presentation, you will provide a brief overview on your selected topic.

          This includes the following aspects:

          1. What is your topic about?
          2. Which literature sources did you research so far?
          3. What is your idea for a demonstration?

          Following your short presentations, the advisors will provide you with feedback and hints for your full presentations.

          Hints for your Presentation

          • As a rule of thumb, you should plan 2 minutes per slide. A significantly higher number of slides per minute exceeds the perceptive capacity of your audience.
          • Prior to your presentation, you should consider the following points: What is the main message of my presentation? What should the listeners take away?
            Your presentation should be created based on these considerations.
          • The following site provides many good hints: http://www.garrreynolds.com/preso-tips/

          Seminar Days

          The date and time of the seminar opening meeting will be announced via OPAL.

          Report

          • Important hints on citing:
            • Any statement which does not originate from the author has to be provided with a reference to the original source.
            • "When to Cite Sources" - a very good overview by the Princeton University
            • Examples for correct citation can be found in the IEEE-citation reference
            • Web resources are cited with author, title and date including URL and Request date. For example:
              • [...] M. Nottingham and R. Sayre. (2005). The Atom Syndication Format - Request for Comments: 4287 [Online]. Available: http://www.ietf.org/rfc/rfc4287.txt (18.02.2008).
              • [...] Microsoft. (2015). Microsoft Azure Homepage [Online]. Available: http://azure.microsoft.com/ (23.09.2015).
              • A URL should be a clickable hyperlink, if technically possible.
          • Further important hints for the submission of your written report:
            • Apart from justifiable exceptions (for instance, highlighting text using <strong>...</strong>), use only HTML elements which occur in the template. The provided CSS file may not be changed.
            • Before submitting your work, carefully check spelling and grammar, preferably with software support, for example with the spell checker of Microsoft Word.
            • Make sure that your HTML5 source code has no errors. To check your HTML5 source code, use the online validator of W3.org.
            • For submission, compress all necessary files (HTML, CSS, images) into a ZIP or TAR.GZ archive.

          Review

          • Each seminar participant has to review exactly three reports. The reviews are not anonymous.
          • Following the review phase, each seminar participant will receive the three peer reviews of his or her report and, if necessary, additional comments by the advisors. You will then have one more week to improve your report according to the received feedback.
          • The seminar grade will consider the final report.
            All comments in the reviews are for improving the text and therefore in the interest of the author.

          Press Articles